This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Less Wrong: Open Thread, September 2010

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes

... (read more)
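A quick sanity check of the "best of five eliminates most of the luck" claim (a sketch, assuming independent games that the more-practiced player wins with probability $p$):

\[
P(\text{series win}) = p^3 + 3p^3 q + 6 p^3 q^2 = p^3 \left( 1 + 3q + 6q^2 \right), \qquad q = 1 - p,
\]

summing over series that end 3-0, 3-1, and 3-2. With $p = 0.6$ the stronger player takes the series about 68% of the time; with $p = 0.75$, about 90%. Lengthening the match amplifies a per-game skill edge, just as the anecdote claims.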
8[anonymous]
I suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this combined with the fact that they're probably used to being "the smart one" makes it extremely intimidating to post anything, and extremely demotivating if they make a mistake. In the interests of spreading the idea that it's OK if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community. Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even skills like metacognition tend not to apply outside of the specific domain you've learned them in. Intelligence is one of the only things that gives you a general problem-solving/task-completion ability.
1xax
Only if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.
7jimrandomh
This is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai!
5Daniel_Burfoot
I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins). As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.
3Kaj_Sotala
This is true. Intelligence research has shown that intelligence is more useful for more complex tasks, see e.g. Gottfredson 2002.
4[anonymous]
I like this anecdote. I never overvalued intelligence relative to practice, thanks to an upbringing that focused pretty heavily on the importance of effort over talent. I'm more likely to feel behind, insufficiently knowledgeable to the point that I'm never going to catch up. I don't see why it's necessarily a cheerful observation that practice makes a big difference to performance. It just means that you'll never be able to match the person who started earlier.
2Houshalter
Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

I imagine lots of kids play Farmville already.

4Kaj_Sotala
Those games don't really improve any sort of skill, though, and neither does anyone expect them to. To teach kids this, you need a game where you as a player pretty much never stop improving, so that having spent more hours on the game actually means you'll beat anyone who has spent less. Go might work.
6rwallace
There are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it. Every time I contemplate that, I wish all the talent thus spent could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.
3Christian_Szegedy
I agree with you. I also think that there are several reasons for that: First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer. The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sport training. Perhaps they are viewed as "fun activities." Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will result in much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (in some of those areas, beginning seriously at age 10, or even 6, is almost too late).
3NihilCredo
A somewhat related, impactful graph. Of course, human effort and interest is far from perfectly fungible. But your broader point retains a lot of validity.
-1Houshalter
Yes, but what would it matter if 200 billion hours were spent refining Wikipedia? There is only so much knowledge you can pump into it. I don't think that's a fair comparison.
3AdeleneDawner
So what else could we accomplish? I didn't read it as 'Wikipedia could be 2,000 times better', but 'we could have 2,000 Wikipedia-grade resources'. (Which is probably also not true - we'd run out of low-hanging fruit. Still.)
0timtyler
Go is useful, I figure. As games go, it is one of the best. Perhaps computer games will one day surpass it - but, in many ways, that hasn't happened yet.
5Sniffnoy
There's a large difference between the "leveling up" in such games, where you gain new in-game capabilities, and actually getting better, where your in-game capabilities stay the same but you learn to use them more effectively. ETA: I guess perhaps a better way of saying it is, there's a large difference between the causal chains time->winning, and time->skill->winning.
1Jonathan_Graehl
I'm guilty of a sort of fixation on IQ (not actual scores or measurements of it). I have an unhealthy interest in food, drugs and exercises (physical and mental) that are purported to give some incremental improvement. I see this in quite a few folks here as well. To actually accomplish something, more important than these incremental IQ differences are: effective high-level planning and strategy, practice, time actually spent trying, finding the right collaborators, etc. I started playing around with some IQ-test-like games lately and was initially a little let down with how low my performance (percentile, not absolute) was on some tasks at first. I now believe that these tasks are quite specifically-trainable (after a few tries, I may improve suddenly, but after that I can, but choose not to, steadily increase my performance with work), and that the population actually includes quite a few well-practiced high-achievers. At least, I prefer to console myself with such thoughts. But, seeing myself scored as not-so-smart in some ways, I started to wonder what difference it makes to earn a gold star that says you compute faster than others, if you don't actually do anything with it. Most people probably grow out of such rewards at a younger age than I did.
1Wei Dai
I'm not sure I agree with that. In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas? I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?
7Kaj_Sotala
I should probably note that my overvaluing of intelligence is more of an alief than a belief. Mostly it shows up if I'm unable to master (or at least get a basic proficiency in) a topic as fast as I'd like to. For instance, on some types of math problems I get quickly demotivated and feel that I'm not smart enough for them, when the actual problem is that I haven't had enough practice on them. This is despite the intellectual knowledge that I could master them, if I just had a bit more practice. That sounds about right, though I would note that there's a huge amount of background knowledge that you need to absorb on LW. Not just raw facts, either, but ways of thinking. The lack of improvement might partially be because some people have absorbed that knowledge when they start posting and some haven't, and absorbing it takes such a long time that the improvement happens too slowly to notice.
3wedrifid
That's interesting. I hadn't got that impression, but I haven't looked too closely at such trends either. There are a few people whose comments have improved dramatically, but the difference seems to be social development and not necessarily their rational thinking - so perhaps you have a specific kind of improvement in mind. I'm interested in any further observations on the topic by yourself or others.

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.
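Wei Dai's source isn't reproduced in this thread, but for readers curious about the mechanics, here is a minimal illustrative sketch of two of the pieces involved: the ==UserScript== metadata block that Greasemonkey requires, and the scroll-based read-marking from the feature list above (explained further a few comments below). The class name, timings, and styling are assumptions for illustration, not the actual script's code.

    // ==UserScript==
    // @name        LW comment reader (illustrative sketch only)
    // @namespace   http://example.org/lw-reader
    // @include     http://lesswrong.com/*
    // ==/UserScript==

    // Sketch of scroll-based read-marking: a comment that has been
    // scrolled off the top of the screen for 5 seconds is greyed out.
    // Assumes comments are elements with class "comment".
    (function () {
      const READ_DELAY_MS = 5000;
      const timers = {};

      function scrolledOffTop(el) {
        return el.getBoundingClientRect().bottom < 0;
      }

      function markRead(el) {
        el.style.color = 'gray';
        el.style.borderColor = 'gray';
      }

      function checkComments() {
        const comments = document.getElementsByClassName('comment');
        for (let i = 0; i < comments.length; i++) {
          const c = comments[i];
          const id = c.id || String(i);
          if (scrolledOffTop(c)) {
            // Start a 5-second countdown the first time it leaves the screen.
            if (!timers[id]) {
              timers[id] = setTimeout(() => markRead(c), READ_DELAY_MS);
            }
          } else if (timers[id]) {
            // Back on screen before 5 seconds elapsed: cancel the countdown.
            clearTimeout(timers[id]);
            delete timers[id];
          }
        }
      }

      window.addEventListener('scroll', checkComments, false);
    })();

The real script also threads the comments, colour-codes them, and persists read state across refreshes; the sketch above shows only the read-marking skeleton.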

6Wei Dai
Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one; just visit http://www.ibiblio.org/weidai/lesswrong_user.php (I put in a 1-hour cache to reduce server load, so you may not see the user's latest work).
0NihilCredo
May I suggest submitting the script to userscripts.org? It will make it easier for future LessWrong readers to find it, as well as making it detectable by Greasefire.
0ata
Nice! Thanks. Edit: "shows only new/unread comments upon refresh" — how does it determine readness?
0Wei Dai
Any comment that has been scrolled off the screen for 5 seconds is considered read. (If you scroll back, you can see that the text and border have turned from black to gray.) If you scroll to the bottom and stay there for 5 seconds, all comments are marked read.
0andreas
Thanks for coding this! Currently, the script does not work in Chrome (which supports Greasemonkey out of the box).
2Wei Dai
See http://dev.chromium.org/developers/design-documents/user-scripts for the limitations of Chrome's user-script support. My script uses 4 out of the 6 Greasemonkey features listed there, and also cross-domain GM_xmlhttpRequest (the comments are actually loaded from a PHP script hosted elsewhere, because LW doesn't seem to provide a way to grab 400 comments at once), so it's going to have to stay Firefox-only for the time being. Oh, in case anyone developing LW is reading this, I release my script into the public domain, so feel free to incorporate the features into LW itself.
0Morendil
Would you consider making display of author names and points a toggle and hidden by default, à la Anti-Kibitzer?
2Wei Dai
I've added some code to disable the author/points-based coloring when Anti-Kibitzer is turned on in your account preferences. (Names and points are already hidden by the Anti-Kibitzer.) Here is version 1.0.1. More feature requests or bug reports are welcome.
0[anonymous]
Sounds fantastic! Err... but the link is broken.

Not sure what the current state of this issue is, apologies if it's somehow moot.

I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.

Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.

So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.

6Vladimir_Nesov
He did give permission to restore the posts (I didn't ask about comments) when I contacted him originally. There remains the issue of someone being technically able to restore these posts.
6matt
We have the technical ability, but it's not easy. We wouldn't do it without Roko's and Eliezer's consent, and a lot of support for the idea. (I wouldn't expect Eliezer to consent to restoring the last couple of days of posts/comments, but we could restore everything else.)
5wedrifid
It occurs to me that there is a call for someone unaffiliated to maintain a (scraped) backup of everything that is posted in order to prevent such losses in the future.
2Douglas_Knight
Surely it would be easy to restore just Roko's posts, leaving his comments dead. Also, if you don't end up restoring them, it's rather awkward that he's in the top contributors list, with a practically dead link.
2matt
It's doable. Are you now talking to the wrong person? [ETA: Sorry - reading that back it was probably rude - I meant to say something closer to "It's doable, but I still need Eliezer's okay before I'll do anything."]
7Eliezer Yudkowsky
Okay, granted. I also think this would be a good idea. Actually, I'd be against having an easy way to delete all contributions in general; it's too easy to wreck things that way.
9wedrifid
Are you saying that the only person who should be conveniently able to remove other people's contributions is you? People's comments are their own. It is unreasonable to leave them up if the author chooses otherwise. Fortunately, things that have been posted on the internet tend to be hard to destroy. Archives can be created and references made to material that has been removed (for example, see RationalWiki). This means that a blogger cannot expect to be able to remove their words from the public record, even though they can certainly stop publishing them themselves, removing their ongoing implied support of those words. I actually do support keeping an archive of contributions, and it would be convenient if LW had a way to easily restore lost content. It would have to be done in a way that was either anonymized ("deleted user"?) or gave some clear indication that the post is by "past-Roko" or "archived-Roko", rather than pretending that it is by the author himself, in the present tense. That is, it would acknowledge the futility of deleting information on the internet but maintain common courtesy to the author. There is no need to disempower the author ourselves by removing control over their own account when the very nature of the internet makes deletion efforts futile anyway.
4XiXiDu
What is necessary is just that EY think of a way to tell people why something had to be deleted, without referring in detail to what has been deleted, and why they should trust him on that. I see that freedom of speech has to end somewhere. Are we going to publish detailed blueprints for bioweapons? No. I just don't see how EY wants to accomplish that, as in the case of the Roko incident you cannot even talk about what has been deleted in an abstract way. Convince me not to spread the original posts and comments as much as I can? How are you going to do that? I already posted another comment yesterday with the files, which I deleted again after thinking about it. The rationale is just too remote and fuzzy for me; I end up playing with the content in question without thinking twice. What I mean is that I personally have no problem with censorship, if I can see why it had to be done.
8khafra
I've been thinking about it by moving domains: Imagine that, instead of communicating by electromagnetic or sound waves, we encoded information into the DNA of custom microbes and exchanged them. Would there be any safe way to talk about even the specifics of why a certain bioweapon couldn't be discussed? I don't think there is. At some point in weaponized conversation, there's a binary choice between inflicting it on people and censoring it.
8Alicorn
Like the descoladores!
0khafra
Hah, I didn't realize someone else had already imagined it. Generalizing from multiple, independently-generated fictional evidence?
3XiXiDu
Awesome reply, thanks :-) Didn't know about this either, thanks Alicorn. I wonder how the SIAI is going to resolve that problem if it caused nightmares inside the SIAI itself. Is EY going to solve it all by himself? If he was going to discuss it, then with whom, since he doesn't know who's strong enough beforehand? That's just crazy. Time will end soon anyway, so why worry I guess. Bye.
0David_Gerard
And not just the example you cite. RationalWiki has written an entire MediaWiki extension specifically for the purpose of saving snapshots of Web pages, because people trying to cover their tracks is a regular occurrence on some sites we run regular news pages on (Conservapedia, Citizendium). Memory-holing gets people really annoyed, because it's socially extremely rude. It's the same problem as editing a post to make a commenter look foolish. There may be good general reasons for memory-holing, but it must be done transparently - there is too much precedent for presuming bad faith unless otherwise proven.
0gwern
Seems like a heavy-weight solution. I'd just use http://webcitation.org/ (probably combined with my little program, archiver).
0David_Gerard
A simple mechanism to put the saved evidence in the same place as the assertions concerning it, rather than out in the cloud, is not onerous in practice. Mind you, most of the disk load for RW is the images ...
[-]homunq190

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile it with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words

6wnoise
My gloss on it is that this is at best a minor part, though it figures in. The topic is an idea that has horrific implications which are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea, because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation. Explaining more would reveal why many won't think it dangerous at all. The following doesn't directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer. I personally don't buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I'm willing to self-censor to some degree, even though I hate the heavy-handed response.
9cata
Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don't really live my life in a way that's consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats. I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.

How about an informed consent form:

  • (1) I know that the SIAI mission is vitally important.
  • (2) If we blow it, the universe could be paved with paper clips.
  • (3) Or worse.
  • (4) I hereby certify that points 1 & 2 do not give me nightmares.
  • (5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
1Snowyowl
I feel you should detail point (1) a bit more (explain in more detail what the SIAI intends to do), but I agree with the principle. Upvoted.
1wedrifid
I like it! Although 5 could be easily replaced by "Go earn a lot of money in a startup, never think about FAI again but still donate money to SIAI because you remember that you have some good reason to that you don't want to think about explicitly."
7Kaj_Sotala
I read the idea, but it seemed to have basically the same flaw as Pascal's wager does. On that ground alone it seemed like it shouldn't be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn't save the post.)
5timtyler
My analysis was that it described a real danger. Not a topic worth banning, of course - but not as worthless a danger as the one that arises in Pascal's wager.
0homunq
I think that, even if this is a minor part of the reasoning for those who (unlike me) believe in the danger, it could easily be the best, most consensus* basis for an explicit deletion policy. I'd support such a policy, and definitely think a secret policy is stupid for several reasons. *no consensus here will be perfect.
3homunq
I think it's safe to tell you that your second two hypotheses are definitely not on the right track.

If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion is very rare here.

I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.

8Emile
I'm pretty sure that "riddle theory" is a reference to Roko's post, not a new banned topic.
5homunq
My registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia's "contact user" function (which connects to the same email.) Suggestions like your email idea would be the main purpose of having the discussion (here or on a top-level post). I don't think that some short-lived chatter would change a strongly-held belief, and I have no desire nor capability of unseating the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted post authors or at least their ability to view the responses to their own deleted post, would be helpful.
6xamdam
Yes. I think that the lack of a policy 1) reflects poorly on the objectivity of moderators, even if in appearance only, and 2) diverts too much energy into nonproductive discussions.
4Relsqui
As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators). This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.
0NancyLebovitz
Any thoughts about whether there are differences between communities with a lot of specific rules and those with a more general "be excellent to each other" standard?
4Relsqui
That's a really good question; it makes me want to do actual experiments with social communities, which I'm not sure how you'd set up. Failing that, here are some ideas about what might happen: Moderators of a very strictly rule-based community might easily find themselves in a walled garden situation just because their hands are tied. (This is the problem we had in the one I mentioned, before we made a conscious decision to be more flexible.) If someone behaves poorly, they have no justification to wield to eject that person. In mild cases they'll tolerate it; in major cases, they'll make an addition to the rules to cover the new infraction. Over time the rules become an unwieldy tome, intimidating users who want to behave well, reducing the number of people who actually read them, and increasing the chance of accidental infractions. Otherwise-useful participants who make a slip get a pass, leading to cries of favoritism from users who'd had the rules brought down on them before--or else they don't, and the community loses good members. This suggests a corollary of my earlier admonition for flexibility: What written rules there are should be brief and digestible, or at least accompanied by a summary. You can see this transition by comparing the long form of one community's rules, complete with CSS and anchors that let you link to a specific infraction, and the short form which is used to give new people a general idea of what's okay and not okay. The potential flaw in the "be excellent to each other" standard is disagreement about what's excellent--either amongst the moderators, or between the moderators and the community. For this reason, I'd expect it to work better in smaller communities with fewer of either. (This suggests another corollary--smaller communities need fewer written rules--which I suspect is true but with less confidence than the previous one.) If the moderators disagree amongst themselves, users will rightly have no idea what's okay and isn't;
5[anonymous]
A minute in Konkvistador's mind:
1thomblake
I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed. I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.
5[anonymous]
Can I haz evil soul crushing idea plz? But to be serious: yes, if I find the idea foolish yet people take it seriously, that reduces my optimism as well, just as much as malice on the part of the Less Wrong staff or just plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don't take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would be much less fun knowing the evil soul crushing self-fulfilling prophecy (tm); the situation around it is hilarious. What really catches my attention, however, is the thought experiment of how exactly one is supposed to quarantine a very very dangerous idea, since in the space of all possible ideas, I'm quite sure there are a few that could prove very toxic to humans. The LW members that take it seriously are doing a horrible job of it.
0NancyLebovitz
Upvoted for the cat picture.
0thomblake
Indeed, in the classic story, it was an idea whose time had come, and there was no effective means of quarantining it. And when it comes to ideas that have hit the light of day, there are always going to be those of us who hate censorship more than death.
4Airedale
I think such discussion wouldn't necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven't gotten around to it.
2Emile
I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well. See this on Meatball Wiki, which has quite a few pages on the organization of online communities.
[-]homunq100

I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.

I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of supressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.

3timtyler
The second rule of Less Wrong is, you DO NOT talk about Forbidden Topics.
0homunq
Your sarcasm would not be obvious if I didn't recognize your username.
0timtyler
Hmm - I added a link to the source, which hopefully helps to explain.
0homunq
Quotes can be used sarcastically or not.
0timtyler
I don't think I was being sarcastic. I won't take the juices out of the comment by analysing it too completely - but a good part of it was the joke of comparing Less Wrong with Fight Club. We can't tell you what materials are classified - that information is classified.
3Emile
It's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread. Too much meta discussion is bad for a community.
1homunq
The thing I'm trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that's possible, and I believe it is more appropriate to discuss this in public. (Actually, since I've been making noise about this, and since I've promised not to reveal it, I now know the secret. No, I won't tell you, I promised that. I won't even tell who told me, even though I didn't promise not to, because they'd just get too many requests to reveal it. But I can say that I don't believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)
0[anonymous]
How much evidence for the existence of a textual Langford Basilisk would you require before considering it a bad idea to write about it in detail?
1JGWeissman
Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
5homunq
Look, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.
-1wedrifid
Really? Go read the sequences! ;)
0[anonymous]
Hell? That's it?
1[anonymous]
Thanks. More reason to waste less time here. I have been reading OB and LW from about a month after OB's founding, but this site has been slipping for over a year now. I don't even know what specifically is being discussed; not even being able to mention the subject matter of the banned post, and having secret rules, is outstandingly stupid. Maybe I'll come back again in a bit to see if the "moderators" have grown up.
6NihilCredo
As a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome). At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else - specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.
[-]Liron190

I made this site last month: areyou1in1000000.com

0Snowyowl
It seems that I am not one in a million. Pity.
0Erik
At least you're not alone.
0Oscar_Cunningham
Me neither. :(

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

9Snowyowl
Wow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
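For reference, the algebra behind the "correlation between X and Y-X" framing (a sketch, assuming X and Y are equally noisy measurements of the same quantity, with common variance $\sigma^2$ and correlation $\rho$):

\[
\operatorname{Cov}(X,\, Y - X) = \operatorname{Cov}(X, Y) - \operatorname{Var}(X) = (\rho - 1)\,\sigma^2 \le 0.
\]

Unless the measurements are perfectly correlated ($\rho = 1$), the change $Y - X$ is negatively correlated with the first score $X$: extreme first scores predict movement back toward the mean, with no causal mechanism required.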
2Vladimir_M
When I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper: Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980 Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put for open access online -- it would be definitely worth it -- but I was unable to find any contact information; it seems like he retired long ago. There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally: Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973
[-]DSimon150

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The

... (read more)

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything that's distinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.

9steven0461
I think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying. The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.
1SilasBarta
Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need to have characters that promote non-functioning technology, after some warning that, e.g., not everyone is reliable and that these people aren't the tutorial.
9DSimon
Best I think would be if the warning came implicitly as part of the game, and a little ways into it. For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion. Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.
2SilasBarta
Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and be able to throw out unreliable claims in the face of bold attempts to protect them from being proven wrong. So: is this game idea feasible, and does it meet your criteria?
3DSimon
I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th-century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants. On the particular criteria:

  • Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.
  • Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.
  • Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).
  • Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it.
1SilasBarta
Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea. Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!" Different magnetic field too, so they can't just say, "lol, make a compass, it points north". I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself. Also, something to motivate the storyline: you need to come up with better cannonballs for the navy (i.e. have to identify what increases a metal's yield energy), or come up with a way of detecting counterfeit coins.
0Mass_Driver
Let me know if you would like help with the writing, either in terms of brainstorming, mapping the flow, or even just copyediting.
5CronoDAS
Or you could just go look up the correct answers on gamefaqs.com.
2JGWeissman
So the game should generate different sets of fake names for each time it is run, and have some variance in the forms of clues and which NPC's give them.
4CronoDAS
Ever played Nethack? ;)
0JGWeissman
Yes, a little, but I never really got into it. As I recall, Nethack didn't do what I suggest so much as not tell you what certain things are until you magically identify them.
7DSimon
Well, there are other ways in NetHack to identify things besides the "identify" spell (which itself must be identified anyways). You can:

  • Try it out on yourself. This is often definitive, but also often dangerous. Say if you drink a potion, it might be a healing potion... or it might be poison... or it might be fruit juice. A 1/3 chance of existential failure for a given experiment is crappy odds; knowledge isn't that valuable.
  • Get an enemy to try it. Intelligent enemies will often know the identities of scrolls and potions you aren't yet familiar with. Leaving a scroll or potion on the ground and seeing what the next dwarf that passes by does with it can be informative.
  • Try it out on an enemy. Potions can be shattered over an enemy's head instead of being drunk; this is safer than drinking it yourself, though you may not notice the effects as readily, and it's annoyingly easy to miss and just waste the potion on the wall behind the monster.
  • Various other methods that can at least narrow down the identification: have your pet walk on it to see if it's cursed, offer to sell it to a shopkeep to get an idea of how valuable it is, dip things in unknown potions to see if some obvious effect (i.e. corrosion) occurs, scratch at the ground with unknown wands to see if sparks/flames are created and if so what kind, kick things to see if they are heavy or light, and so on and so on...

The reason NetHack isn't already the Ideal Experimental Method Game is that once you learn what the right experiments are, you can just use them repeatedly each game; the qualitative differences between magical items are always the same, and it's just a matter of rematching label to effect for each new session. On the other hand, for newbie players, where the experimental process might be exciting and novel... well, usually they're too busy experiencing Yet Another Silly Death to play scientist thoroughly. Heck, a lot of the early deaths will be directly due to un-clever experiments.
8Alicorn
This reminds me of something I did in a D&D game once. My character found three unidentified cauldronsful of potions, so she caught three rats and dribbled a little of each on a different rat. One rat died, one turned to stone, and one had no obvious effects. (She kept the last rat and named it Lucky.)
0CronoDAS
Did you try using the two lethal potions as weapons?
2Alicorn
I didn't get ahold of vials that would shatter on impact before the game fizzled out (a notorious play-by-post problem). I did at one time get to use Lucky as a weapon, though. Sadly, my character was not proficient with rats.
3CronoDAS
It's a rat-flail!
4Alicorn
Nah, I used him as a thrown weapon. (He was fine and I retrieved him later.)
2Gunnar_Zarncke
Nethack as ML training environment: https://nethackchallenge.com/ 
1CronoDAS
Yes. That's why it isn't quite the perfect solution: you can still look up a "cookbook" set of experiments to distinguish between Potion That Works and Potion That Will Get You Killed.
7Raemon
To be fair, in real life, it's perfectly okay that once you determine the right set of experiments to run to analyze a particular phenomenon, you can usually use similar experiments to figure out similar phenomena. I'm less worried about infinite replay value and more worried about the game being fun the first time through.
2JGWeissman
Cookbook experiments will suffice if you are handed potions that may have a good effect or that may kill you. But if you have to figure out how to mix the potion yourself, this is much more difficult. Learning the cookbook experiments could be the equivalent of learning chemistry.
9Emile
Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!) One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality though. One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, but some actions are vitally important but not rewarded by any feedback. For example (to keep with the "build an AI" example), say you can hire scientists, and the scientists' profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.), and some filler text about what they did their thesis on and boring stuff like that (think: stats get big icons, and are at the top; filler text looks like boring background filler text). And once you hire the scientists, you get various bonuses (money, prestige points, experiments), but the only one of those factors that's of any importance at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from various tell-tale signs for "stupid" in the "boring" filler texts - for example, things like (also) having a degree in theology, or having published a paper on homeopathy ... stuff that would indeed be a bad sign for a scientist, but that nothing in the game ever tells you is bad. So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too
4DSimon
I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (e.g. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed. Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface: the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the original understanding the player had was actually the incorrect one.
[-]Raemon210

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest, and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game i... (read more)

2Emile
Huh, sounds very interesting! So my awesome game concept would give rise to a lame game, eh? *updates* I hadn't heard of that game, I might try it out. I'm actually surprised a game like that was made and commercially published.
5Raemon
It's a good game, just with a very narrow target audience. (This site is probably a good place to find players who will get something out of it, since you have higher than average percentages of people willing to take a lot of time to think about and explore a cerebral game.) Some specific lessons I'd draw from that game and apply here:

1. Don't penalize failure too hard. The Void's single biggest issue (for me) is that even when you know what you're doing you'll need to experiment, and every failure ends with death (often hours after the failure). I reached a point where every time I made even a minor failure I immediately loaded a saved game. If the purpose is to experiment, build the experimentation into the game so you can try again without much penalty (or make the penalty something that is merely psychological instead of an actual hampering of your ability to play the game).

2. Don't expect players to figure things out without help. There's a difference between a game that teaches people to be rational and a game that simply causes non-rational people to quit in frustration. Whenever there's a rational technique you want people to use, spell it out. Clearly. Over and over (because they'll miss it the first time). The Void actually spells out everything as best it can, but the game still drives players away because the mechanics are simply unlike any other game out there. Most games rely on an extensive vocabulary of skills that players have built up over years, and thus each instruction only needs to be repeated once to remind you of what you're supposed to be doing. The Void repeats instructions maybe once or twice, and it simply isn't enough to clarify what's actually going on. (The thing where NPCs lie to you isn't even relevant till the second half of the game. By the time you get to that part you've either accepted how weird the game is or you've quit already.)

My sense is that the best approach would be to start with a relatively normal (mechanic
3NihilCredo
It was made by a Russian developer that is better known for its previous effort, Pathologic, a somewhat more classical first-person adventure game (albeit very weird and beautiful, with artistic echoes from Brecht to Dostoevskij), but one with a similar problem of being murderously hard and deceptive - starving to death is quite common. Nevertheless, Pathologic had acceptable sales and excellent critical reviews in Russia, which is why Ice-Pick Lodge could go on to a second project.
2PeerInfinity
"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky." See also: The Friendly AI Critical Failure Table And I think all of the other suggestions you made in this comment would make an awesome game! :D
0Emile
Ooh, I had forgot about that table - Gurps Friendly AI is also of interest.
1Emile
Riffing off my weird biology / chemistry thing: a game based on the breeding of weird creatures, by humans freshly arrived on the planet (add some dimensional travel if you want to justify weird chemistry - I'm thinking of Tryslmaistan). The catch is (spoiler warning!) that the humans got the wrong rules for creature breeding, and some plantcrystalthingy they think is the creatures' food is actually part of their reproduction cycle, through which some essential "genetic" information passes. And most of the things that look like in-game help and tutorials are actually wrong, and based on a model that's more complicated than the real one (it's just a model that's closer to earth biology).
6khafra
I'm not sure if Transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.
3Emile
Transformice is awesome :D A game hasn't made me laugh that much for a long time. And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.
3Perplexed
One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.

The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.

Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.

But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.
3steven0461
Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.
0CronoDAS
Another thing that's relatively easy to create is a Neverwinter Nights module, but you're pretty much stuck with the D&D mechanics if you go that route.
0Oscar_Cunningham
One idea I'd like to suggest would be a game where the effectiveness of the items a player has changes randomly hour by hour. Maybe an MMO with players competing against each other, so that they can communicate information about which items are effective. Introduce new items with weird effects every so often so that players have to keep an eye on their long-term strategy as well.
2DSimon
I think a major problem with that is that most players would simply rely upon the word on the street to tell them what was currently effective, rather than performing experiments themselves. Furthermore, changes in only "effectiveness" would probably be too easy to discover using a "cookbook" of experiments (see the NetHack discussion in this thread).
1Oscar_Cunningham
I'm thinking that the parameters should change just quickly enough to stop consensus from forming (maybe it could be driven by negative feedback, so that once enough people are playing one strategy it becomes ineffective). Make using a cookbook expensive. Winning should be difficult, and only exactly the right combination will succeed.
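A toy sketch of that negative-feedback rule (my own illustration - the items and the decay formula are made up, not from any actual game): an item's effective power falls as its share of total usage rises, so any cookbook of "best items" invalidates itself as soon as it becomes popular.

```python
# Hypothetical mechanic: item power decays linearly with the item's share of
# total player usage, so popularity itself makes a strategy ineffective.

BASE_POWER = {"sword": 10.0, "wand": 8.0, "bow": 9.0}  # made-up items

def effectiveness(item, usage_share):
    """Effective power after the negative-feedback penalty for popularity."""
    return BASE_POWER[item] * (1.0 - usage_share[item])

# Word gets around that swords are "best", so most players switch to them:
usage_share = {"sword": 0.7, "wand": 0.05, "bow": 0.25}

for item in BASE_POWER:
    print(item, round(effectiveness(item, usage_share), 2))
# sword 3.0, wand 7.6, bow 6.75 -- the least-used item is now the strongest.
```

Under a rule like this the meta never settles, which is exactly the anti-consensus property described above.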
2DSimon
I think this makes sense, but can you go into more detail about this: I didn't mean a cookbook as an in-game item (I'm not sure if that's what you were implying...), I meant the term to mean a set of well-known experiments which can simply be re-run every time new results are required. If the game can be reduced to that state, then a lot of its value as a rationality teaching tool (and also as an interesting game, to me at least) is lost. How can we force the player to have to come up with new ideas for experiments, and see some of those ideas fail in subtle ways that require insight to understand? My tendency is to want to solve this problem by just making a short game, so that there's no need to figure out how to create a whole new, interesting experimental space for each session. This would be problematic in an MMO, where replayability is expected (though there have been some interesting exceptions, like Uru).
3Oscar_Cunningham
Ah, I meant: "Make each item valuable enough that using several just to work out how effective each one is would be a fatal mistake." Instead you would have to keep track of how effective each one was, or watch the other players for hints.
0taryneast
Hmmm - changing things frequently means you'll have some negative knock-on effects. You'll be penalising anybody who doesn't game as often - e.g. people with a life. You stand a chance of alienating a large percentage of the audience, which is not a good idea.

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day... (read more)

4beriukay
One issue I've frequently stumbled across is people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't ever want to talk about issues because they feel like talking about their beliefs with me is like being subject to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words always seemed disingenuous to me, but maybe it's worth it if I can get someone to actually consider the opposition by saying things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied ... (read more)

2Morendil
Yes please. Does the term "bridger" ring a bell for you? (It's from Greg Egan's Diaspora, in case it doesn't, and you'd have to read it to get why I think that would be an apt name for what you're describing.)
0Relsqui
It doesn't, and I haven't, although I can infer at least a little from the term itself. Your call if you want to try and explain it or wait for me to remember, find a library that has it, acquire it, and read it before understanding. ;) Is there any specific subject under that umbrella which you'd like addressed? Narrowing the focus will help me actually put something together.
0Morendil
The Wikipedia page explains a little about Bridgers. I'm afraid if I knew how to narrow this topic down I'd probably be writing it up myself. :)
0Relsqui
Hmm. I'm wary of the analogy to separate species; humans treat each other enough like aliens as it is. But so noted, thank you.
1Rain
I wanted to say thank you for providing these services. I like performing the same translations, but it appears I'm unable to do it effectively in a text medium; I require immediate feedback, body language, etc. When I saw some of your posts on old articles, apparently just as you arrived, I thought to myself that you would genuinely improve this place in ways that I've been thinking were essential.
1Relsqui
Thanks! That's actually really reassuring; that kind of communication can be draining (a lot of people here communicate naturally in a way which takes some work for me to interpret as intended). It is good to hear that it seems to be doing some good.

[tl;dr: quest for some specific cryo data references]

I am preparing to do my own, deeper evaluation of cryonics. For that I read through many of the case reports on the Alcor and CI pages. Due to my geographic situation I am particularly interested in the feasibility of actually getting a body from Germany over to their respective facilities. The reports are quite interesting and provide lots of insight into the process, but what I am still looking for are the unsuccessful reports: cases in which a signed-up member was not brought in due to legal interference, next-of-kin decisions, and the like. Is anyone aware of a detailed log of those? I would also like to see how many signed-up clients are lost due to the circumstances of their death.

0Document
Can't help with your question, but speaking of Europe....

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

9Airedale
The words righteous indignation in combination are sufficiently well-recognized as to have their own wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance. Edited to add this possibly relevant/interesting link I came across, where David Brin describes self-righteous indignation as addictive.
6Perplexed
Strikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive for both rationalists and less rational folks for pretty much the same reasons.
0Airedale
I meant that the religious connotations might not be a reason to use the term if Will is trying to come up with the most accurate term for what he’s describing. To the extent the term is tied up in Christianity, it may not convey meaning in the way Will wants – although the more Will explains how he is using the term, the less problematic this would be. And I agree that what you say suggests an interesting way that Will can appropriate a religious term and make some interesting compare-and-contrast type points.
6jimrandomh
I noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.
6Eliezer Yudkowsky
Sounds related to the failure class I call "living in the should-universe".
3Will_Newsome
It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter. Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.
5[anonymous]
Righteous indignation is a good name for it. I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong. I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them. So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.
5komponisto
Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist. It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer. It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.
4wedrifid
That could work well when backed up with a description of just what you will be using the term to mean. I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.
3[anonymous]
I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness." As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.
3David_Allen
In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others. I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature. I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.
0[anonymous]
outrage?
-2SilasBarta
Pardon the self-promotion, but that sounds like the feeling of recognizing a SAMEL, i.e. that there is some otherwise-ungrounded inherent deservedness of something in the world. (SAMEL = subjunctive acausal means-end link, elaborated in the article)

In the spirit of "the world is mad" and for practical use, NYT has an article titled Forget what you know about good study habits.

2Matt_Simpson
Something I learned myself that the article supported: taking tests increases retention. Something I learned from the article: varying study location increases retention.
[-]matt100

Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010

More information including speakers at http://summit.singinst.org.au.
Register here.

1wedrifid
Wow. Next Tuesday and in my hometown! Nice.
0meta_ark
Sigh... I would consider flying down from Sydney to go to it, but sadly I'm in a show that whole week and have to miss out entirely. Ah well. Hopefully they'll have the audio online, but I would have loved to mingle with people who share my worldview.
-10Clippy

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

2jimmy
I've noticed this too. There's no easy way to 'unfold all', is there?

The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me in their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few non-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin wa... (read more)
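For concreteness, here is a minimal sketch of that situation (my own construction, not the actual coursework): the sigma-algebra for one toss is {∅, {H}, {T}, S}, the axioms force P(∅) = 0 and P(S) = 1, and any value p = P({H}) in [0, 1] yields a well-defined probability measure - which is exactly why the theorem alone licenses no claim that the coin is fair.

```python
# Minimal sketch: verify that P({H}) = p gives a valid probability measure on
# the coin-toss sigma-algebra for *any* p in [0, 1], fair or not.
from itertools import chain, combinations

S = frozenset({"H", "T"})

def powerset(s):
    """All subsets of s; for a finite S this is automatically a sigma-algebra."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def make_measure(p):
    """Extend P({H}) = p to the whole sigma-algebra by additivity."""
    return {A: sum(p if x == "H" else 1 - p for x in A) for A in powerset(S)}

def is_probability_measure(P):
    """Check Kolmogorov's axioms: non-negativity, P(S) = 1, finite additivity."""
    nonneg = all(v >= 0 for v in P.values())
    normed = abs(P[S] - 1) < 1e-12
    additive = all(abs(P[A | B] - (P[A] + P[B])) < 1e-12
                   for A in P for B in P if not (A & B))
    return nonneg and normed and additive

for p in (0.5, 0.3, 1.0):
    print(p, is_probability_measure(make_measure(p)))  # True for each p
```

Fairness (p = 1/2) is an extra modeling assumption layered on top of the theorem, which was presumably the point of contention.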

I just listened to Robin Hanson's pale blue dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still see... (read more)

The journalistic version:

[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest fo

... (read more)
5Vladimir_M
The study looks at people over 55 years of age. It is possible that there is some sort of selection effect going on -- maybe decades of heavy drinking will weed out all but the most alcohol-resistant individuals, so that those who are still drinking heavily at 55-60 without ever having been harmed by it are mostly immune to the doses they're taking. From what I see, the study controls for past "problem drinking" (which they don't define precisely), but not for people who drank heavily without developing a drinking problem, but couldn't handle it any more after some point and decided themselves to cut back. Also, it should be noted that papers of this sort use pretty conservative definitions of "heavy drinking." In this paper, it's defined as more than 42 grams of alcohol per day, which amounts to about a liter of beer or three small glasses of wine. While this level of drinking would surely be risky for people who are exceptionally alcohol-intolerant or prone to alcoholism, lots of people can handle it without any problems at all. It would be interesting to see a similar study that would make a finer distinction between different levels of "heavy" drinking.
4cousin_it
These are fine conclusions to live by, as long as moderate drinking doesn't lead you to heavy drinking, cirrhosis and the grave. Come visit Russia to take a look.
1Vladimir_M
The discussion of the same paper on Overcoming Bias has reminded me of another striking correlation I read about recently: http://www.marginalrevolution.com/marginalrevolution/2010/07/beer-makes-bud-wiser.html It seems that for whatever reason, abstinence does correlate with lower performance on at least some tests of mental ability. The question is whether the controls in the study cover all the variables through which these lower abilities might have manifested themselves in practice; to me it seems quite plausible that the answer could be no.
2Morendil
A hypothesis: drinking is social, and enjoying others' company plays a role in survival (perhaps in learning too?).
0jimrandomh
That's very interesting, but I'm not sure I trust the article's statistics, and I don't have access to the full text. Could someone take a closer look and confirm that there are no shenanigans going on?

I'm writing a post on systems to govern resource allocation; is anyone interested in having input into it or just proofreading it?

This is the intro/summary:

How do we know what we know? This is an important question; however, there is another question which in some ways is more fundamental: why did we choose to devote resources to knowing those things in the first place?

As a physical entity, the production of knowledge takes resources that could be used for other things, so the problem expands to how to use resources in general. This I'll call the resou

... (read more)
7Snowyowl
This sounds interesting and relevant. Here's my input: I read this back in 2008 and I am summarising it from memory, so I may make a few factual errors. But I read that one of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously.

A particular program was responsible for allocating system resources so that the systems which were operating were operating at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two.

My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
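To make that trade-off concrete, here is a toy cost model (my own numbers and formula, nothing from the original article): each optimizer pass has a fixed cost, while each new inefficiency keeps wasting power until the next pass catches it, so waste grows with the check interval. Balancing the two terms gives a square-root rule for how often to run.

```python
import math

# Toy model (assumed numbers): a pass costs RUN_COST; inefficiencies appear at
# LAMBDA per hour; each unfixed one wastes WASTE per hour until the next pass
# (so it waits T/2 on average when passes run every T hours).
RUN_COST = 5.0   # cost of one optimizer pass
LAMBDA = 2.0     # new inefficiencies per hour
WASTE = 0.5      # waste per hour per unfixed inefficiency

def hourly_cost(T):
    """Total expected cost per hour if the optimizer runs every T hours."""
    return RUN_COST / T + LAMBDA * WASTE * T / 2

# Setting the derivative to zero balances run cost against accumulated waste:
T_opt = math.sqrt(2 * RUN_COST / (LAMBDA * WASTE))  # about 3.16 hours here

for T in (0.1, 1.0, T_opt, 10.0):
    print(f"every {T:5.2f} h -> {hourly_cost(T):6.2f} cost/hour")
```

Running many times a second (tiny T) is dominated by the cost of the passes themselves, which is exactly the failure mode described above.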
4Oscar_Cunningham
I was having similar thoughts the other day while watching a reality TV show where designers competed for a job from Philippe Starck. Some of them spent ages trying to think of a suitable project, and then didn't have enough time to complete it; some of them launched into the first plan they had and it turned out rubbish. Clearly they needed some meta-planning. But how much? Well, they'll need to do some meta-meta planning... I'd be happy to give your post a read through. ETA: The buck stops immediately, of course.
1xamdam
Upvoted for importance of subject - looking forward to the post. Have you read up on Information Foraging?
0whpearson
I'm going to be discussing the organisational design level, rather than a strategic or tactical level of resource management.

In "The Shallows", Nicholas Carr makes a very good argument that replacing deep reading books, with the necessarily shallower reading online or of hypertext in general, causes changes in our brains which makes deep thinking harder and less effective.

Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of your self-direction/self-discipline behaviors they are replacing.

Another problem with the Web, that wasn't discussed in "The Sh... (read more)

9PhilGoetz
I haven't read Nicholas Carr, but I've seen summaries of some of the studies used to claim that book reading results in more comprehension than hypertext reading. All the ones I saw are bogus. They all use, for the hypertext reading, a linear extract from a book, broken up into sections separated by links. Sometimes the links are placed in somewhat arbitrary places. Of course a linear text can be read more easily linearly. I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic. A more fair test would be to give students a topic to study, with the same material, but some given books, and some given the book material organized and indexed in a competent way as hypertext. Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.
8allenwang
It seems to me that the main reason most hypertext sources seem to produce shallower reading is not the fact that they contain hypertext itself, but that the barriers to publication are so low that the quality of most written work online is usually much lower than printed material. For example, this post is something that I might have spent 3 minutes thinking about before posting, whereas a printed publication would have much more time to mature and also many more filters, such as publishers, to take out the noise. It is more likely that book reading seems deeper because the quality is better. Also, it wouldn't be difficult to test this hypothesis with print and online newspapers, since they both contain the same material.

It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.

1SilasBarta
Agree completely. While most of what's on the internet is low-quality, it's easy to find the domains of reliably high-quality thought. I've long felt that I get more intellectual stimulation from a day of reading blogs than I've gotten from a lifetime of reading printed periodicals.
0zero_call
It's not that books take longer to produce, it's that books just tend to have higher quality, and a corollary of that is that they frequently take longer to produce. Personally I feel fairly certain that the average quality of my online reading is substantially lower than offline reading.
4xamdam
It has deeper structure, but that is not necessarily user-friendly. A great textbook will have different levels of explanation, an author-designed depth-diving experience. Depending on the author, the material, you, and the local wikipedia quality, that might be a better or worse learning experience. Yep, definitely a benefit, but not without a trade-off. Often a good author will set you up with connections better than you can yourself.
0PhilGoetz
But not better than a good hypertext author can.
0xamdam
If the hypertext is intentionally written as a book, which is generally not the case.
3jacob_cannell
I like allenwang's reply below, but there is another consideration with books. Long before hyperlinks, books evolved comprehensive indices and references, and these allow humans to relatively easily and quickly jump between topics in one book and across books. Now are the jumps we employ on the web faster? Certainly. But the difference is only quantitative, not qualitative, and the web version isn't enormously faster.
0zero_call
Hypertext reading has strong potential, but it also has negative aspects that you don't encounter as much with standard books. For example, it's much easier to get distracted or side-tracked by a lot of secondary information that might not even be very important.
3JohnDavidBustard
It is very difficult to distinguish rationalisations of the discomfort of change from actual consequences. If this belief that hypertext leads to a less sophisticated understanding than reading a book is true, what behaviour would change that could be measured?

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like the 5 love languages and material on attachment styles have been useful.

Thanks.

6Relsqui
I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.

The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book. If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.

A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war). I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them to do so (almost everyone), this requires flexing mental
1eugman
Both look rather useful, thanks for the suggestions. Also, Google Books has Nonviolent Communication.
0Relsqui
You're welcome, and thanks--that's good to know. I'll bookmark it for when it comes up again.
0pjeby
I rather liked the page about how we're made of meat. Thanks for the cool link!
1Relsqui
You're welcome! Glad you like it. I'm a fan of that particular page as well--it's probably the technique I refer to/think about explicitly from that book second most, after the usual error itself. It's valuable to be able to separate the utility of hearing something to gain knowledge and that of hearing something you already know to gain reassurance--it just bypasses a whole bunch of defensiveness, misunderstanding, or insecurity that doesn't need to be there.
1RHollerith
I could point to some blogs whose advice seems good to me, but I won't, because I think I can help you best by pointing only to material (alas, no blogs) that has actually helped me in a serious relationship -- there being a huge difference in quality between advice of the form "this seems true to me" and advice of the form "this actually helped me".

What has helped me more in my relationships than any other information is the non-speculative parts of the consensus among evolutionary psychologists on sexuality, because they provide a vocabulary for me to express hypotheses (about particular situations I was facing) and a way for me to winnow the field of prospective hypotheses and bits of advice I get online, from which I choose hypotheses and bits of advice to test. In other words, ev psych allows me to dismiss many ideas so that I do not incur the expense of testing them. I needed a lot of free time, however, to master that material. Probably the best way to acquire the material is to read the chapters on sex in Robert Wright's Moral Animal. I read that book slowly and carefully over 12 months or so, and it was definitely worth the time and energy. Actually, the material in Moral Animal on friendship (reciprocal altruism) is very much applicable to serious relationships too, and the stuff on sex and friendship together form about half the book.

Before I decided to master basic evolutionary psychology in 2000, the advice that helped me the most was from John Gray, author of Men Are From Mars, Women Are From Venus. Analytic types will mistrust author and speaker John Gray because he is glib and charismatic (the Maharishi or such who founded Transcendental Meditation once offered to make Gray his successor and the inheritor of his organization), but his pre-year-2000 advice is an accurate map of reality IMHO. (I probably only skimmed Mars and Venus, but I watched long televised lectures on public broadcasting that probably covered the same material.)
-4Violet
Do you really need a "theory of mind" for that? Our partners are not a foreign species. Communicate lots in an open and honest manner with hir and try to understand what makes that particular person click.
8JoshuaZ
Yes, you do. Many people who have highly developed theories of mind seem to underestimate how much unconscious processing they are doing, processing that is profoundly difficult for people who don't have such developed theories of mind. People who are mildly on the autism spectrum in particular (generally below the threshold of diagnosis) often have a lot of difficulty with this sort of unconscious processing, but can do a much better job if given a lot of explicit rules or heuristics.
0eugman
Thank you. I believe I may fall in this category. I am highly quantitative and analytical, often to my detriment.
0eugman
Yes. You are assuming ze has a high level of introspection which would facilitate communication. This isn't always the case.

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http... (read more)

1Sniffnoy
I recall someone claiming here earlier that they could do anything if they bet they could, though I can't find it right now. Useful to have some more explicit evidence about that.

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? When asked what qualities she would bring to the competition, she said she'd "bring without any doubt my rationality", among other things.

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial featur

... (read more)

Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?

4kim0
I am, and I am planning to leave it to get a higher, more average pay. From my viewpoint, it is terribly overrated and undervalued.
6Daniel_Burfoot
Can you expand on this? Do you think your experience is typical?
3kim0
Most places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having an M.Sc. in physics and lots of programming experience made it possible for me to have more different kinds of engineering jobs, and thus more varied experience. My conclusion is that the anthropic principle holds for me in the workplace, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical. My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.
4xamdam
Ping Arthur Breitman on Facebook or LinkedIn. He is part of the NYC LW meetup, and a quant at Goldman.

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply inconclusive evidence consistent with a hard takeoff actively in progress).

5jimrandomh
It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction. My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

0jimmy
I just finished reading that book. It is mostly from a "defense against" perspective. Reading the chapter names provides a decent [extremely short] summary, and I expect that you're already aware that they are influences. That said, when I read through it, there were a lot of "Aha!" moments, when I realized something I'd seen was actually a well thought out 'weapon of influence' - and now my new hobby is saying "Chapter 3: Commitment and Consistency!" every time I see it used as persuasion. The whole book is hard to put down, and makes me want to quote part of it to the nearest person in about every paragraph or two. I'd consider writing such a post, but I'm not sure how to compress it - the very basics should be obvious to the regulars here, but the details take time to flesh out.
0CronoDAS
Yes, I would like such a post.

Idea - Existential-risk-fighting corporations

We people of normal IQ are advised to work our normal day jobs, in the best competency that we have, and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value and there is such a diversity of skills that they cannot make a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might ... (read more)

3wedrifid
This would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? Or even an open thread comment?

I used to be a professional games programmer and designer and I'm very interested in fun. There are a couple of good books on the subject: A theory of fun and Rules of play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

4Mass_Driver
I found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?
5JohnDavidBustard
Yes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre-specific. I like Rules of Play, not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped their interactions down to their most basic motivating mechanics (Farmville or Diablo, for example).

To provide a concrete example, I worked on a game (Medievil Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost. Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and be understood in isolation.
0JamesAndrix
Will upvote
6komponisto
I've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guarant... (read more)
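The comment is truncated, but the conditional-expectation part is easy to sketch numerically, with the caveat that "given no information" forces you to assume some prior; the scale-free (Pareto-style) prior below is my assumption, not the commenter's. Observing that the light is still on at time t just truncates and renormalizes whatever prior you put on X.

```python
import numpy as np

# Sketch under an assumed prior: discretize candidate durations, truncate the
# prior at the observed elapsed time t, renormalize, and take the mean.

def expected_duration(x, prior, t):
    """E(X | X > t) for a discretized prior over durations x."""
    mask = x > t
    w = prior[mask] / prior[mask].sum()  # truncated, renormalized prior
    return float((x[mask] * w).sum())

x = np.linspace(0.01, 1000.0, 200_000)  # grid of candidate durations
prior = x ** -3.0                       # assumed scale-free-ish (Pareto) prior

for t in (1.0, 10.0, 100.0):
    print(t, expected_duration(x, prior, t))
# With this prior the answer is roughly 2*t (until the grid's upper cutoff
# bites): the longer the light has been on, the longer you expect it to stay on.
```

All of the difficulty lives in the choice of prior: a different prior gives a different E(X | X > t), and the scale-invariant family is just a common default when "no information" is taken literally.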

Finally prompted by this, but it would be too offtopic there:

http://lesswrong.com/lw/2ot/somethings_wrong/

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.

There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.

2Perplexed
My guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced. I think that the only way to top it would be a Singularity/FAI-themed computer game. How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips? Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.
[-]ata100

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)

Unfortunately, I don't think we can get people excited about bringing about a Friendly singular... (read more)

3Perplexed
I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you. "Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.
5ata
Thank you for taking it well; sometimes I still get nervous about criticizing. :) I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for. (And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.) Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.
0jacob_cannell
What ideas? I'm pretty sure I find whatever you are talking about interesting and shiny, but I'm not quite sure what it even is.
0JamesAndrix
Any ideas. For the SIAI it would probably be existential risks first and UFAI later; in general it could be rationality or evolution or atheism or whatever.
0jacob_cannell
What is the whole industry you speak of? Self-help, religion, marketing? And what additional advertising? I think that spreading the ideas is important as well; I'm just not sure what you are considering.
2JamesAndrix
Advertising/marketing. Short of ashiest bus ads, I can't think of anything that's been done. All I'm really suggesting is that we focus on mass persuasion in the way it has been proven to be most efficient. What that actually amounts to will depend on the target audience, and how much money is available, among other things.
2jacob_cannell
Did you mean "atheist bus ads"? I actually find strict-universal-atheism to be irrational compared to agnosticism because of the SA and the importance of knowing the limits of certainty, but that's unrelated and I digress. I've long suspected that writing popular books on the subject would be an effective strategy for mass persuasion. Kurzweil has certainly had a history of some success there, although he also brings some negative publicity due to his association with dubious supplements and the expensive SingUniversity. It will be interesting to see how EY's book turns out and is received. I'm actually skeptical about how far rationality itself can go towards mass persuasion. Building a rational case is certainly important, but the content of your case is even more important (regardless of its rationality). On that note I suspect that bridging a connection to the mainstream's beliefs and values would go a ways towards increasing mass marketability. You have to consider not just the rationality of ideas, but the utility of ideas. It would be interesting to analyze and compare how emphasizing the hope vs doom aspects of the message would effect popularity. SIAI at the moment appears focused on emphasizing doom and targeting a narrow market: a subset of technophile 'rationalists' or atheist intellectuals and wooing academia in particular. I'm interested in how you'd target mainstream liberal christians or new agers, for example, or even just the intellectual agnostic/atheist mainstream - the types of people who buy books such as the End of Faith, Breaking the Spell, etc etc. Although a good portion of that latter demographic is probably already exposed to the Singularity is Near.
0JamesAndrix
I'm not sure what I'd do, but I'm not a marketing expert either. (Though I am experimenting.) It would probably be possible to make a campaign that took advantage of UFAI in sci-fi. AIs taking over the world isn't a difficult concept to get across, so the ad would just need to persuade people that it's possible in reality, and that there is a group working towards a solution. I hope you haven't forgotten our long drawn-out discussion, as I do think that one is worthwhile.
6ata
AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. It is also, of course, wrong. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.
1JamesAndrix
They have the right conclusion (plausible AI takeover) for slightly wrong reasons. "Hate [humans] or resent [humans] or desire higher status than [humans]" are slightly different values than ours (even if just like the values humans often have towards other groups). So we can gradually nudge people closer to the truth a bit at a time by saying "Plus, it's unlikely that they'll value X, so even if they do something with the universe it will not have X". But we don't have to introduce them to the full truth immediately, as long as we don't base any further arguments on falsehoods they believe. If someone is convinced of the need for asteroid defense because asteroids could destroy a city, you aren't obligated to tell them that larger asteroids could destroy all humanity when you're asking for money. Even if you believe bigger asteroids to be more likely. I don't think it's dark epistemology to avoid confusing people if they've already got the right idea.
3Vladimir_Nesov
Writing up high-quality arguments for your full position might be a better tool than "nudging people closer to the truth a bit at a time". Correct ideas have a scholarly appeal due to internal coherence, even if they need to overcome plenty of cached misconceptions, but making that case requires a certain critical mass of published material.
2JamesAndrix
I do see value in that, but I'm thinking of a TV commercial or YouTube video with a Terminator-style look and feel - though possibly emphasizing that against real superintelligence, there would be no war. I can't immediately think of a way to simplify "the space of all possible values is huge and human-like values are a tiny part of it", and I don't think that would resonate at all.
0jacob_cannell
A large portion of the world has already seen a Terminator flick, or the Matrix. The AI-is-evil-nonhuman-threat meme is already well established in the wild, to the point of caricature. The AI-is-an-innocent-child meme wasn't as popular - the film A.I. is the only example I can think of, and not many people saw it. And even though the Terminator and the Matrix are far from realistic, they did at least get the general shape of the outcome correct - humans lose. What would your message add over this in reach or content? At this point the meme is almost oversaturated and it is difficult for people to take seriously. Did "The Day After Tomorrow" help or hinder the environmental movement?
0JamesAndrix
This might not fit the Terminator motif anymore, but: that there are people working on a way to target AI development so it reliably looks more like R2-D2, Johnny 5, Commander Data, Sonny, Marvin... OK, that's all I can think of, but just for fun I'll get these from Wikipedia: Gort, Bishop from Aliens, almost everything from the Jetsons, Transformers (Autobots anyway), the Iron Giant, and KITT. And again, we don't have to explain that AI done right will be orders of magnitude more helpful than any of these.
0jacob_cannell
It's interesting that friendly AI was so common in earlier decades and then this seemed to shift in the 90's. As for AI-positive advertisements, that somehow reminds me... did you ever see that popular viral anti-banking video called Zeitgeist? In the sequel the author seems to have realized that just being a critic wasn't enough, so the second part of Zeitgeist: Addendum suddenly turns into a Star Trek-ish utopia proposal out of nowhere. I forget the name, but it is basically some architect's pseudo-singularity (AI solves all our problems and makes these beautiful new cities for us but isn't really conscious or dangerous). I went to a screening of that film in LA, and I was amazed at how entranced the audience seemed to be. The questions at the end were pretty funny too - "so... there won't be any money? And the AIs will build us whatever we want?" "Yes" "So, what if I want to turn all of Texas into my house?" ...
1timtyler
You are thinking of Jacque Fresco.
-3jacob_cannell
I actually come from that outside-LW viewpoint that finds the former scenario, involving "human-like cognitive architectures", vastly more probable than "AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values". So it could be that your viewpoint is more likely, and the rest of us are suffering from "anthropomorphic bias", but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.
1ata
I don't see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where's the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?
0jacob_cannell
Actual uploads are a far end point along a continuum of human-like cognitive architectures, and have the additional complexity of scanning technology, which lags far behind electronics. You don't need uploads for anthropomorphic AI - you just need to loosely reverse-engineer the brain. Also, "human-like cognitive architectures" is a wide spectrum that does not require human-like emotions or primate status drives - consider the example of alexithymia. Understanding human languages is a practical prerequisite for any AI to reach high levels of intelligence, and the implied anthropomorphic cognitive capacities required for true linguistic thinking heavily constrain the design space. The self-fulfilling prophecy is that anthropomorphic AI will be both easier for us to create and more useful to us - so the bias is correct in a self-reinforcing manner.
[-]Cyan30

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to kee... (read more)

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.

4Clippy
That's a really good article, the Microsoft humans really know their stuff.
0wedrifid
Wow! I'm certainly surprised!
0Sniffnoy
I don't know, I'm thinking the idea that this wouldn't happen (which I had as well) may be a case of "living in the 'should universe'"...

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If... (read more)

5Perplexed
Your examples certainly show a grasp of the problem. Definitely - the solution is first sketched in Chapter 4.6 of Jaynes. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge". In particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties, ... to put things mildly. But then classical statistics has its subtleties too.
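To make that concrete, here is a minimal sketch of the model-comparison approach (my illustration, not from the thread; the historical readings are hypothetical numbers). A kernel density estimate stands in for the unknown likelihood p(C | substance) - though note that even a KDE smuggles in assumptions about the density's shape via its kernel and bandwidth, which is exactly the subtlety the question points at.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical historical analogue readings for each substance.
readings_A = np.array([1.02, 0.98, 1.05, 0.97, 1.01])
readings_B = np.array([1.20, 1.18, 1.25, 1.22, 1.19])

# Nonparametric estimates of the likelihoods p(C | A) and p(C | B);
# the Gaussian kernel and bandwidth are themselves shape assumptions.
pdf_A = gaussian_kde(readings_A)
pdf_B = gaussian_kde(readings_B)

def posterior_A(c, prior_A=0.5):
    """Posterior probability that a new reading c came from substance A."""
    weighted_A = pdf_A(c)[0] * prior_A
    weighted_B = pdf_B(c)[0] * (1.0 - prior_A)
    total = weighted_A + weighted_B
    return weighted_A / total if total > 0 else prior_A

print(posterior_A(1.03))  # near 1: the reading looks like substance A
print(posterior_A(1.21))  # near 0: the reading looks like substance B
```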

Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here

test deleted

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (though I'm not sure why; maybe I missed something, since much of the website seems to deal with existential risk - am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackl... (read more)

-1allenwang
(sorry if this comment is too long; continued from above)

Creating Incentives

Of course, a sense of public pride exists in many people, and this has led large numbers of people to learn about the issues without external inducements. But the population of educated voters could be vastly increased if there were these personal benefits, especially for groups where environmentalism has not become a positive norm.

While we have thought about other approaches to creating these wide-ranging personal incentives - specifically, material prizes and the intangible benefits of social networking and personal pride (such as are behind Wikipedia's or Facebook's success) - it appears that these are difficult to apply to the issue of climate change. Material prizes would be costly to fund, especially to make them worth the several hours necessary to learn about the issues. The issues are difficult enough, and the topic possibly scary enough, that it is not necessarily fun to learn about them and discuss them with your friends. For another thing, it takes time and a little bit of dedicated thinking to achieve an adequate understanding of the problem, but part of the incentive to do so on Wikipedia - to show off your genuine expertise on the topic, even if anonymous - is exactly not what is supposed to happen when there is an educated populace on the topic: you will not be a unique expert, just another person who understands the issue like everyone else. The sense of urgency and personal importance needed to spur people to learn just is not there with these modes of incentivization.

But there is one already extremely effective way that companies, schools, and other organizations incentivize behavior that has little to do with immediate personal benefits. These institutions use their ability to advance or deter people's future careers to motivate performance in certain areas. The gatekeepers to these future prospects can use their position to bring about all kinds of behavior that would otherwise seem
1CronoDAS
This seems to have the same problem as teaching evolution in high school biology classes: you can pass a test on something and not believe a word of it. Cracking an information cocoon can be damn hard; just consider how unusual religious conversions are, or how rarely people change their minds on such subjects as UFOs, conspiracy theories, cryonics, or any other subject that attracts cranks. Also, why should employers care about a person's climate change test score? Finally, why privilege knowledge about climate change, of all things, by using it for gatekeeping, instead of any of the many non-controversial subjects normally taught in high schools, for which SAT II subject tests already exist?

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

1Vladimir_Nesov
Human intelligence, certainly; but just intelligence, I'm not so sure.

Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.

1Nisan
You beat me to it :)
[-]knb20

Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble-savage, bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty, as this led to better in-group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?

0timtyler
A blog says: "The model proposed by Christopher Ryan in “Sex at Dawn” (the women in a group have sex with the men in their group, and uncertain paternity leads all of the men to feel responsible for providing for the children) ..." That doesn't require group selection.
-1WrongBot
No, but the book relies on kin selection to some extent: it's beneficial to share resources with your tribe, but not other tribes.

Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.

3Will_Sawin
It's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization. As Eliezer says, emergence is trivial: everything that isn't quarks is emergent. His "universality" argument seems to be that different parts can make the same whole. Well, of course they can. He certainly doesn't make any coherent arguments. Maybe he does in his book?
3Perplexed
Yet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline. What is wrong with these guys? Why, if they would just read the sequences, they would learn the correct way for words like "reduction" and "emergence" to be used in physics.
4khafra
To be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.
2Perplexed
Argument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about. My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.
2khafra
If the first hit is a fair overview, I can see why you're saying it's a confusion in terms; the only outright error I saw was confusing "derivable" with "trivially derivable." If you're saying that nobody important really tries to explain things by just saying "emergence" and handwaving the details, like EY has suggested, you may be right. I can't recall seeing it. Of course, I don't think Eliezer (or any other reductionist) has said that throwing away information so you can use simpler math isn't useful when you're using limited computational power to understand systems which would be intractable from a quantum perspective, like everything we deal with in real life.
[-]taw20

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.

What I found was rather disastrous, far worse than with neat and unambiguous temporal logics. L... (read more)
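For readers unfamiliar with the notation, here is a minimal sketch of standard deontic logic (SDL), the usual textbook starting point (my summary, not the commenter's): obligation O is taken as a primitive normal modal operator, and permission and prohibition are defined from it.

```latex
% Standard deontic logic (SDL): O = obligatory, P = permitted, F = forbidden
\begin{align*}
  P\varphi &\equiv \lnot O \lnot\varphi && \text{permitted = not obligatory to avoid} \\
  F\varphi &\equiv O \lnot\varphi       && \text{forbidden = obligatory to avoid} \\
  O(\varphi \to \psi) &\to (O\varphi \to O\psi) && \text{axiom K: obligation distributes over implication} \\
  O\varphi &\to P\varphi                && \text{axiom D: what is obligatory is permitted}
\end{align*}
```

Much of the trouble with deontic logics arises already from these few axioms - for instance Ross's paradox, where Oφ ("you ought to mail the letter") entails O(φ ∨ ψ) ("you ought to mail the letter or burn it").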

Someone made a page that automatically collects high karma comments. Could someone point me at it please?

1Kazuo_Thow
Here's the Open Thread comment where Daniel Varga made the page and its source code public. I don't know how often it's updated.
0RobinZ
Note that the page in question collects only comments on Rationality Quotes pages.
0Oscar_Cunningham
Yay, thank you! Also, that page is large, large enough to make my brand new computer lag horrendously.
1wedrifid
They did? I've been wishing for something like that myself. I'd also like another page that collects just my high karma comments. Extremely useful feedback!

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

3thomblake
The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.
2kodos96
Ummmmmmmm.... no. The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.
1JanetK
According to my dictionary:

rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism)

This is there as well as:

rational 1. of or based on reasoning or reason

So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.
4timtyler
I wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational. With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly. Personally, when I see material like Science or Bayes - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.
0JanetK
I think you may have misunderstood what I was trying to say. Because the group used Bayesian methods, I had assumed that they would not be anti-scientific. I was surprised when it seemed that they were willing to ignore evidence. I have been reassured that many in the group are rational in the everyday sense and not opposed to empiricism. Indeed it is Science AND Bayes.
2wedrifid
Indeed. It is heretical in the extreme! Burn them!
0JanetK
Is there a reason for the sarcasm? I notice a tendency that seems disturbing to me, and I am pointing it out to see if others have noticed it and have opinions, but I am not attacking. I am deciding whether I fit this group or not - hopefully I can feel comfortable in LW.
3wedrifid
It felt like irony from my end - a satire of human behaviour. As a general tendency of humanity, we seem to be more inclined to be abhorred by beliefs that are similar to what we consider the norm but just slightly different. It is the rebels within the tribe that are the biggest threat, not the tribe that lives 20 km away. I hope someone can give you an adequate answer to your question. The very short one is that empirical evidence is usually going to be the most heavily weighted 'bayesian' (rational) evidence. However, everything else is still evidence, even though it is far weaker.
2Emile
I don't think that's how most people here understand "rationalism".
1JanetK
Good
1timtyler
There is at least one post about that - though I don't entirely approve of it. Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories, both of which are compatible with the evidence, without making further observations. It is not empirical - in that sense.
1Kenny
Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).
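One way to make this precise (my sketch, not the commenter's): when two theories fit the data equally well, Bayes' theorem leaves the choice entirely to the prior, and a complexity-penalizing prior implements the razor without any further observations.

```latex
\frac{P(H_1 \mid D)}{P(H_2 \mid D)}
  = \frac{P(D \mid H_1)}{P(D \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}
  = \frac{P(H_1)}{P(H_2)}
  \quad \text{when } P(D \mid H_1) = P(D \mid H_2)
```

So with, for example, a description-length prior $P(H) \propto 2^{-K(H)}$, the shorter theory wins by exactly the ratio $2^{K(H_2) - K(H_1)}$.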
0[anonymous]
Now that I come to think of it, I've never seen the LW definition of "rationality" used anywhere outside LW and OB, and I've never even seen it explicitly defined. EDIT: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ But if you asked me, I would say it means taking your selfish and unemotional economic agent to his logical extreme: rationally examining one's own thought processes in order to optimise them, rationally examining scientific evidence without interference from one's biases, and rationally accepting the possibility that one has made a mistake.
0Sniffnoy
Here is our definition of rationality. See also the "unnamed virtue".
5thomblake
No, here is our definition of rationality.
3JanetK
Thank you. That seems clear. I will assume that my antennae were giving me the wrong impression. I can relax.
2[anonymous]
Maybe you shouldn't relax. Regardless of official definitions, there is in practice a heavy emphasis on conceptual rigor over evidence. There's still room for people who don't quite fit in.
1Sniffnoy
Ah, that does seem to be better, yes.
0FAWS
In a certain sense rationality is using evidence efficiently. Perhaps overemphasis on that type of rationality tempts one to be sparing with evidence - after all, if you use less evidence to reach your conclusion, you used whatever evidence you did use more efficiently! But not using evidence doesn't mean there is more evidence left afterwards; not using free or very cheap evidence is wasteful, so proper rationality, even in that sense, means using all easily available evidence when practical.
0Houshalter
I'm not sure I follow - why leave certain observations out of your judgement to "use evidence efficiently"? Do you mean to use your resources efficiently, like time and brain power? In that case, you can just define it as using resources as efficiently as possible. You need evidence to gain knowledge, you need knowledge to base theories on, and you need theories to decide how to most effectively spend your resources, which can be spent on anything, including finding more evidence in the first place.
0FAWS
My point was that it doesn't make sense. Even when trying to use evidence efficiently, you should use all evidence (barring the considerations from Frugality and working from finite data, which are only relevant due to certain biases).

Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)

This time, you get to see Landsburg:

  • attempt to prove the existence of the natural numbers while explicitly dismissing the relevance of what sense he's using "existence" to mean!
  • use formal definitions to make claims about the informal meanings of the terms!
  • claim that Peano arithmetic exists "because you can see the marks on paper" (guess it's not a platonic object anymore...)!

(Sorry, XiXiDu, I'll reply to you on his blog if my posting priv... (read more)

3DanielVarga
Wow, a debate where the most reasonable-sounding person is a sysop of Conservapedia. :)
0SilasBarta
Who?
2DanielVarga
Roger Schlafly. Or Roger Schlafly, if you prefer that. His blog is Singular Values. His whole family is full of very interesting people.
0[anonymous]
I always find these entertaining, though I begin to despair of human nature after a while. Thanks for letting me watch.

Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)

>equals(correct_reasoning, Bayesian_inference)

1Clippy
This server is really slow.

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominantly white males, why are we? Should that concern us? There's nothing... (read more)

[-]gwern120

This sounds like the same question as why there are so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

4datadataeverywhere
That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem - I worked for Microsoft as well, and all the non-whites were foreign citizens - but it's not as if there aren't any women in the US! It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them. The Tale of Genji has gone on my list of books to read. Thanks!
6gwern
Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas. I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two then-main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while she is still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general impunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial r

How diverse is Less Wrong?

You may want to check the survey results.

2Relsqui
Thank you; that was one of the things I'd come to this thread to ask about.
1datadataeverywhere
Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.
9cousin_it
Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.
4datadataeverywhere
The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case. I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.
6cousin_it
The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into why are you, you again. ETA: apparently datadataeverywhere is female.
-1datadataeverywhere
No, I think that's a much less flippant answer :-)
0[anonymous]
It's come to my attention that you're female. Apologies for assuming otherwise, and shame on you for not correcting me.
6NancyLebovitz
I've been thinking that there are parallels between building FAI and the Talmud - it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.) Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in? I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled. This doesn't mean that either Eliezer or Gene is wrong (or right, for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1] He recommended recruiting Japanese members, since they're more apt to like and trust robots. I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work. [1] See also Timothy Leary's S.M.I.2L.E. - Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.
5[anonymous]
He has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Less Wrong. This was my motivation for first proposing that we recruit translators for Japanese and Chinese and begin working towards a goal of making at least the sequences available in many languages. Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; I, however, solved that problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other more insidious forms of miscommunication and cultural difficulties persist.
5Wei Dai
I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined. According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian. Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.
2Vladimir_Nesov
All LW users display near-native control of English, which won't be as universal, and typically requires years-long consumption of English content. The English-speaking world is the default source of non-Russian content for Russians, but it might not be the case with native Asians (what's your impression?)
4Wei Dai
My impression is that for most native Asians, the English-speaking world is also their default source of non-native-language content. I have some relatives in China, and to the extent they do consume non-Chinese content, they consume English content. None of them consume enough of it to obtain near-native control of English though. I'm curious, what kind of English content did you consume before you came across OB/LW? How typical do you think that level of consumption is in Russia?
2Perplexed
Unfortunately, browser spell checkers usually can't help you to spell your own name correctly. ;) That is one advantage to my choice of nym.
0wedrifid
Right click, add to dictionary. If that doesn't work then get a better browser.
0[anonymous]
Ehm, you do realize he was making a humorous remark about "Konkvistador" being my user name, right? Edit: Well, it's all clearly Alicorn's fault. ;)
2Perplexed
Actually it was more about Konkivstador not being your name.
0[anonymous]
I do now. Sorry about that.
5Perplexed
I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people. My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.
3timtyler
We have already had quite a lot of that.
2Perplexed
OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;) But thx for the link.
1timtyler
It does have about 100 pages of comments. Consider also the "links to followup posts" in line 4 of that article. It all seemed to go on forever - but maybe that was just me.
2Perplexed
Ok. Well, it is on my reading list now. Again, thx.
3[anonymous]
I don't know why you presume that, because we are mostly 25-35-something White males, a reasonable proportion of us are not deaf, gay or disabled (one of the top-level posts is by someone who will soon deal with being perhaps limited to communicating with the world via computer). I smell a whiff of that weird American memplex for minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often - you know, the one that for example uses the word minority to describe women. Also, I decline the invitation to defend this community for its lack of diversity; I don't see it a priori as a thing in need of a large part of our attention. Rationality is universal - not in the sense of being equally universally valued in different cultures, but certainly in the sense of being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people. I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However, it's ridiculous to imagine that our demographics will somehow come to resemble and match the socio-economically adjusted mix of unspecified ethnicities that you seem to hunt for after we eliminate all such barriers. I assure you White Westerners have their very, very insane spots - we deal with them constantly - but God, for starters, isn't among them; look at the GSS or various sources on Wikipedia and consider how much more of a thought-stopper and a boo light atheism is for a large part of the world. What should the existing population of LessWrong do? Refrain from bashing theism? This might incur down votes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics; is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan etc.) and their majori
1datadataeverywhere
If you had read my comment, you would have seen that I explicitly assume that we are not under-represented among deaf or gay people. If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group? I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal representation among countries, since education obviously ought to make a difference. That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have. This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes". Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups. I wasn't actually intending to bemoan anything with my initial question; I was just curious. I was also shocked when I found out that this community is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.
4[anonymous]
But if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the nervous and endocrine systems themselves. I think you got stuck on the aptitude point. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are even less so, for example gibbons), I'm pretty sure the mean g (if such a thing exists) of both men and women is probably about the same. There are, however, other aspects to succeeding at compsci or math besides general intelligence. Assuming that men and women carrying exactly the same memes will respond on average identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example, the greater historical reproductive success of the average woman vs. the average man, which we can read from the distribution of genes). If I were presented with empirical evidence then this would be just too bad for the models, but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested by other empirical claims? I would venture to state that this case is especially strong for preferences. And if you are trying to fine-tune the situations and memes for each gender so as to balance this, where can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it? Also, to admit a personal bias, I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.
9lmnop
We can't directly measure them, but we can get an idea of how large they are and how they work. For example, take the gender difference in empathic abilities. While women score higher on empathy on self-report tests, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where it isn't stated to the participant that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which worked like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (and so write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident in self-reporting their empathic abilities relative to men. The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire. And then there'
3[anonymous]
How do you know non-neurotypicals aren't over- or under-represented on LessWrong, compared to the groups that you claim are overrepresented on LessWrong relative to your field, in the same way you know that the groups you bemoan as lacking are under-represented relative to your field? Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is for the average case easier than measuring who is neurotypical. But when it comes to definition, those concepts seem to be in the same order of magnitude of fuzziness as being neurotypical (sex is a bit less fuzzy, race a bit more). Also, previously you established that you don't want to compare LessWrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background will affect whether people can grasp or are interested in learning certain key concepts needed to participate. My question now is: why don't we crunch the numbers, instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries and in what numbers visitors come, we know the educational distributions in most of them, and we know how large a fraction of this group is proficient enough in English to participate meaningfully on LessWrong. This is ignoring the fact that the only data we have on sex or race is a simple self-reported poll and our general impression. But if we crunch the numbers and the probability densities end up looking pretty similar from the best data we can find, well, why isn't the burden of proof - that we are indeed wasting potential on LessWrong - on the one proposing policy or action to improve our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common! I'm certain there are a few, perhaps the va
0wedrifid
Typo in a link?
2[anonymous]
I changed the first draft midway when I was still attempting to abbreviate it. I've edited and reformulated the sentence, it should make sense now.
3[anonymous]
I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though they represent a clear numerical majority. I was suspicious that you used the word minority in that sense rather than the more clearly defined sense of being a numerical minority. Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing. Example: discussing the repression of the Mayan minority in Mexico. While other times we can't do this. Example: discussing the history and current relationship between the Arab upper-class minority and slavery in Mauritania. Ah, apologies, I see I carried it over from here: You explicitly state later that you are particularly interested in this axis of diversity. Perhaps this would be more manageable if we looked at each of the axes of variability that you raise and talked about them independently, in as much as this is possible? Again, this is why I previously got confused by your speaking of "groups we usually consider adding diversity" - are there certain groups that are inherently associated with the word diversity? Are we using the word diversity to mean something like "proportionate representation of certain kinds of people in all groups", or are we using the word diversity in line with infinite diversity in infinite combinations, where if you create a mix of 1 part people A and 4 parts people B and have them coexist and cooperate with another that is 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of metadiversity (using the word diversity in its politically charged meaning)? Then why aren't you hunting for equal representation on LW between different groups united in a space as arbitrary as one defined by borders? While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has
1wedrifid
Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.
3datadataeverywhere
I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed for moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of whether they have a rational basis. I was not trying to boo your statement myself. That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you. Edit: fixed typo
6Will_Newsome
Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability, while the average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one. I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.
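To illustrate the purely statistical point (my sketch, with made-up numbers, and assuming normal distributions for simplicity, a shape the very next comment questions): two populations with the same mean but slightly different variance diverge dramatically only in the far tail.

```python
from scipy.stats import norm

# Hypothetical illustration: same mean, standard deviations 1.0 vs 0.9.
sd_1, sd_2 = 1.0, 0.9  # made-up numbers, not measured values

for cutoff in (2, 3, 4):  # "far right of the bell curve", in units of sd_1
    tail_1 = norm.sf(cutoff, loc=0, scale=sd_1)  # fraction above cutoff
    tail_2 = norm.sf(cutoff, loc=0, scale=sd_2)
    print(cutoff, round(tail_1 / tail_2, 1))

# Prints ratios of roughly 1.7, 3.1, and 7.2: a modest variance gap is
# nearly invisible near the mean but large at the extreme right tail.
```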
4[anonymous]
Is mathematical ability a bell curve? My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women. Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia, as opposed to the US), the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poli sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.
5Will_Newsome
I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution. I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.
2datadataeverywhere
I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent. I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?
2wedrifid
Either because you don't understand how bayesian evidence works, or because you think the question is socio-political rather than epistemic. That was the point of making the demand. You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.
0Perplexed
It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics? I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math. But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the numbers of bright and very bright men and women who come from different cultural backgrounds. Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?
1datadataeverywhere
I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task. I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.
-2[anonymous]
The problem with other-optimising here is that it doesn't account for my goals. I care far more about the nature of rational evidence than I do about the drawn out nature vs nurture debates. A direct denunciation of the epistemic rational failure mode of passing the 'proof' buck suits my purposes.
1datadataeverywhere
Actually, it would have been more productive, since you obviously didn't understand what I was saying. I am not claiming that I have evidence suggesting that culture is a stronger factor in mathematical ability than genetics. What I'm claiming is that I don't know of any evidence to show that the two can be clearly distinguished. Ignorance is a privileged hypothesis. Unless you can show evidence of differences in mathematical ability that can be traced specifically to genetics, ignorance reigns here, and we shouldn't assume that either culture or genetics is the stronger factor. The burden of proof lies on you, because you are appealing to me to shift my belief toward yours. I am willing to do this, provided you offer any evidence that does so under a sane framework for reasoning. Meanwhile, the reason the burden of proof is not on me is that I am claiming ignorance, not a particular position. You're being incredibly critical, and have been so in other threads as well. I realize that this is your M.O., and is not solely directed at me, but I would appreciate it if you would specify exactly what I've said, here or in other comments, that has convinced you so thoroughly that I am unable to hold a rational discussion.
3wedrifid
No, I rejected your specific argument because it was by its very nature fallacious. There are other things you could have said but didn't, and with those things I may not have even disagreed. The conversation was initiated by you admonishing others. You have since then danced the dance of re-framing with some skill. I was actually only at the fringes of the conversation. I haven't said that. Specific quotations of arguments or reasoning that I reject tend to be included in my comments. Take the above for example. Your reply does not relate rationally to the quote you were replying to. I reject the argument that you were using (which is something I do consistently - I care about bullshit probably even more than you care about supporting your culture hypothesis). Your response was to weasel your way out of your argument, twist your initial claim such that it has the intellectual high ground, label my disagreement with you a personal flaw, misrepresent my claim to be something that I have not made, and then attempt to convey that I have not given any explanation for my position. That covers modules 1, 2, 3 and 4 in "Effective Argument Techniques 101". I don't especially mind the slander, but it is essentially futile for me to try to engage with the reasoning. I would have to play the kind of games that I come here to avoid.
-1Perplexed
Well, I had promised you a compliment when you deleted a post. So, well done! I'm glad you got rid of that turkey (the great-grandparent).
0wedrifid
Was that the Joan of Arc reference? I've been studying these sex-related genetic mutations and chromosomal abnormalities recently in a biology class, and her name came up. I found it fascinating and nearly left the comment there just for that. Each to their own. :)
3Perplexed
Maybe it was the Joan comment. I can't find it now. That Joan comment annoyed me too, though I didn't say anything at the time. Not your fault, but just let a woman do something remarkable, something almost miraculous, and sure enough, some man 500 years later is going to claim that she must have actually been male, genetically speaking. I wasn't feminist at all until I came here to LW. Honest!
0wedrifid
She is a woman, regardless of whether she has a Y chromosome. It is the SRY gene that matters genetically. So we can use that observation to free us up to call evidence evidence without committing crimes against womankind. If my (most decidedly female) lecturer is to be believed, the speculation was based primarily on personal reports from her closest friends. It included things like menstrual patterns (and the lack thereof) and personal habits. I didn't look into the details to see whether or not this was an allusion to the typically far shorter vagina becoming relevant. I'm also not sure whether the line of reasoning was prompted by some historian trying to work out what on earth was going on while researching her personal life, or just by biologists liking to feel that their knowledge is relevant to impressive people and events. If she hadn't done famous things we probably wouldn't have any records whatsoever to go on, nor would anyone care to look.
-1datadataeverywhere
You're starting to sound like a troll. I would feel less sure of that if you hadn't just admitted, in another comment, that you don't expect to care what you're arguing about. What do you want out of this discussion? Personally, I would like to be better informed about an area where smart people disagree with me. You're not helping me attain that goal, since you are providing me with no evidence. Meanwhile, you are continuing to hold a hostile tone and expecting me to support positions I neither hold nor claim to hold. If you have an actual interest in either the topic of this discussion or working with me to fix whatever it is that has sent up so many red flags with you, I'd appreciate it. I don't feel like I'm guilty of any of the things you mentioned, but if you feel adamantly that I am, I'm happy to listen to specifics so that I can evaluate and fix that behavior. If instead you feel merely like insulting me, I urge you to make better use of your time.
-1wedrifid
It is my policy to remove comments whenever social aggressors find them useful to take out of context, and I have done so with the subject of your link, assuring Relsqui that it was nothing to do with him. In that discussion Relsqui and I came to an amicable agreement to disagree. He (if he'll pardon the assumption of gender and chastise me if I have made an incorrect inference) had already made some hints in that direction in the ancestor, and acknowledging that I too didn't think such a trivial matter of word definition was really worth arguing about is a gesture of respect. (Some people find it annoying if the other person leaves them hanging, especially if they had offered to extend the discussion mostly as a gesture of goodwill, which is what I had taken from Relsqui.)

I'll note that whatever you may think of me personally, a distinguishing feature of trolls is that they enjoy provoking an emotional response in others, while I find it unsavoury. Even though I have actively developed a thicker 'emotional skin' (see the related concurrent discussion) when it comes to frustrations, this sort of conflict will always be a net psychological drain.

My goal was to support Will's comment in the face of a reply that I would have found frustrating and that was also an error in reasoning. In the future I will reply directly to Will (or whomever), expressing agreement and elaborating on the point with more details. Replying to the undesired comment gave more attention to it rather than less, and obscuration would perhaps have been more useful than rebuttal.
5Relsqui
For what it's worth, it is very hard to distinguish between someone who is deliberately provoking a negative reaction and someone who is not very practiced at anticipating what choices of language or behavior might cause one. I, like datadataeverywhere, did get the impression that you were at least one of those things; off the top of my head, here are a few specific reasons:

* Your initial comment disagreed with my terminology without actually addressing it directly, merely asserting that I was wrong without providing evidence or argument. This struck me as aggressive and also poorly reasoned.
* You persisted in the argument about definition despite, as you later said, not caring about it. I did not continue that thread out of goodwill but out of a desire to resolve the disagreement and return to the original topic--hence stopping and checking in that we were on the same page. That's why it annoyed me when you said you didn't care; in that case, I wish we hadn't wasted the time on it!
* Applying the label "social aggressor" to someone who is explicitly trying to find out what's going on in the conversation and steer it somewhere useful. (In fairness, dde suggesting you're a troll was not necessary either, but the situations are different in that I have not noticed you specifically trying to get the conversation on track.)
* Not answering direct questions, especially when they are designed to return the conversation to a productive topic.

I hope I'm not overstepping my bounds by spelling this out; my impression of the LW community is that constructive criticism is encouraged. Therefore, I'm giving you specific suggestions to avoid making a negative impression you seem not to want to make. Conveniently, this will also resolve the ambiguity in my first (non-quoted) sentence in this comment: if you confirm that you want to avoid garnering negative reactions in conversation, it'll be clear that you are indeed not a troll.
6wedrifid
Absolutely not. In general, people overestimate the importance of 'intrinsic talent' in anything. The primary heritable component of success in just about anything is motivation; either g or height comes second, depending on the field.
3datadataeverywhere
I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough not to be evident behind the screen of either random or environmental differences. I think this applies to motivation as well! And that really was my claim: anyone who claims that women are inherently less able in mathematics has to show that any measurable effect is distinguishable from, and not caused by, cultural factors that propel fewer women to have an interest in mathematics.
1wedrifid
It doesn't. (Unfortunately.)
1datadataeverywhere
Am I misunderstanding, or are you claiming that motivation is purely an inherited trait? I can't possibly agree with that, and I think even simple experiments are enough to disprove that claim.
4wedrifid
Misunderstanding. Expanding the context slightly: It doesn't. (Unfortunately.) When it comes to motivation, the differences between people are not trivial. When it comes to the particular instance of differences between the sexes, there are powerful differences in motivating influences. Most human motives are related to sexual signalling and gaining social status. The optimal actions for achieving these goals are significantly different for males and females, which is reflected in which things are the most motivating. It most definitely should not be assumed that motivational differences are purely cultural - and it would be astonishing if they were.
2datadataeverywhere
Are you speaking from an evolutionary context, i.e. claiming that what we understand to be optimal is hardwired, or are you speaking of which actions are actually perceived as optimal in our world? You make a really good point, one I hadn't thought of but agree with, but since I don't think that we behave strictly in a manner that our ancestors would consider optimal (after all, what are we doing at this site?), I can't agree that sexual and social signaling's effect on motivation can be considered acultural.
3Emile
I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.
7thomblake
My vague impression is that the proportion of people here with sexual orientations that are not in the majority in the population is higher than that of such people in the population. This is probably explained completely by LW's tendency to attract ~~weirdos~~ people who are willing to question orthodoxy.
0[anonymous]
For starters, we have quite a few people who practice polyamory.
-1datadataeverywhere
It might matter whether or not one counts closeted gays. Either way, I was just throwing another potential partition into the argument. I also doubt that we differ significantly in our proportion of deaf people; the point is that being deaf is qualitatively different yet shouldn't impair one's rational capabilities. Same for being female, black, or most of the groups that we think of as adding to diversity.
3[anonymous]
Too little memetic diversity is clearly a bad thing, for the same reason too little genetic variability is. However, how much and what kind are optimal depends on the environment. Also, have you considered the possibility that diversity for you is not a means to an end but a value in itself? In that case, unless it conflicts with other values you consider more important, you don't need any justification for it. I'm quite honest with myself that I hope that post-singularity the universe will not be paperclipped with only the things that I and people like me (or humans in general, for that matter) value. I value a diverse universe.

Edit: I.. uhm... see. At first I was very confused by all the far-reaching implications of this; however, thanks to keeping a few things in mind, I'm just going to ascribe this to you being from a different cultural background than me.
2datadataeverywhere
Diversity is a value for me, but I'd like to believe that it is more than simply an aesthetic value. Of course, if wishes were horses we'd all be eating steak. Memetic diversity is one of the non-aesthetic arguments I can imagine, and my question is partially related to that. Genetic diversity is superfluous past a certain point, so it seems reasonable that the same might be true of memetic diversity. Where is that point relative to where Less Wrong sits?

Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?
7[anonymous]
Well, I will try to elaborate. After I read this, it struck me that you may value a much smaller space of diversity than I do, and that you probably value very particular kinds of diversity (race, gender, some types of culture) much more, perhaps even to the exclusion of others (non-neurotypical, ideological, and especially values). I'm not saying you don't (I can't know this) or that you should.

I at first assumed you thought the way you do because you came up with a system more or less similar to my own, an incredibly unlikely event; that is why I scolded myself for employing the mind projection fallacy, while providing a link pointing out that this particular component is firmly integrated into the whole "stuff White people like" (for lack of a better word) culture that exists in the West, so anyone I encounter online with whom I share the desire for certain spaces of diversity is on average overwhelmingly more likely to get it from that memplex.

Also, while I'm certainly sympathetic to hoping one's values are practical, one needs to learn to live with the possibility that one's values are neutral, or even impractical, or perhaps in conflict with each other.

I overall in principle support efforts to lower unnecessary barriers for people to join LessWrong. But the OP doesn't seem to make it explicit that this is about values, and you wanting other LessWrongers to live by your values, but seems to communicate that it's about being the optimal course for improving rationality. You haven't done this. Your argument so far has been to simply go from "arbitrary designated group/blacks/women are capable of rationality, but are underrepresented on LessWrong" to "LessWrong needs to divert some (as much as needed?) efforts to correct this." Why? Like I said, lowering unnecessary barriers (actually, at this point you even have to make the case that they exist and that they aren't simply the result of the other factors I described in the post) won't repel the people who alrea

Konkvistador:

After I read this it struck me that you may value a much smaller space of diversity than I do. And that you probably value the very particular kinds of diversity (race, gender,some types of culture) much more or even perhaps to the exclusion of others (non-neurotypical, ideological and especially values).

There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?

I am sure that a complete and accurate answer to this question would open a floodgate of insight about the modern society.... (read more)

3NancyLebovitz
That's intriguing. Would you care to mention some of the sorts of diversity which usually aren't on the radar?
3AdeleneDawner
I've spent some time thinking about this, and my conclusion is that, at least personally, what I value about diversity is the variety of worldviews that it leads to.

This does result in some rather interesting issues, though. For example, one of the major factors in the difference in worldview between dark-skinned Americans and light-skinned Americans is the existence of racism, both overt and institutional. Thus, if I consider diversity to be very valuable, it seems that I should support racism. I don't, though - instead, I consider that the relevant preferences of dark-skinned Americans take precedence over my own preference for diversity. (Similarly, left-handed peoples' preference for non-abusive writing education appropriately took precedence over the cultural preference for everyone to write with their right hands, and left-handedness is, to the best of my knowledge, no longer a significant source of diversity of worldview.)

That assumes coherence in the relevant group's preference, though, which isn't always the case. For example, among people with disabilities, there are two common views that are, given limited resources, significantly conflicting: The view that disabilities should be cured and that people with disabilities should strive to be (or appear to be) as normal as possible, and the view that disabilities should be accepted and that people with disabilities should be free to focus on personal goals rather than being expected to devote a significant amount of effort to mitigating or hiding their disabilities. In such cases, I support the preference that's more like the latter, though I do prefer to leave the option open for people with the first preference to pursue that on a personal level (meaning I'd support the preference 'I'd prefer to have my disability cured', but not 'I'd prefer for my young teen's disability to be treated even though they object', and I'm still thinking about the grey area in the middle where such things as 'I'd prefer for

With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.

Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.

Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.

3wedrifid
My experience is similar. Even people who are usually extremely rational go loopy. I seem to recall one post there that specifically targeted the issue. But you did ask "what basis should", while Robin was just asserting a controversial "is".
3Vladimir_M
wedrifid: I probably didn't word my above comment very well. I am also asking only for an accurate description of the controversial "is." The fact is that nearly all people attach great moral importance to these issues, and what I'd like (at least for a start) is for them to state the "shoulds" they believe in clearly, comprehensively, and coherently, and to explain the exact principles with which they justify these "shoulds." My questions above should be understood in these terms.
0wedrifid
If you are sufficiently curious you could make a post here. People will be somewhat motivated to tone down the hysteria given that you will have pre-emptively shunned it.
5datadataeverywhere
I think I'm going to stop responding to this thread, because everyone seems to be assuming I'm meaning or asking something that I'm not. I'm obviously having some problems expressing myself, and I apologize for the confusion that I caused. Let me try once more to clarify my position and intentions:

I don't really care how diverse Less Wrong is. I was, however, curious how diverse the community is along various axes, and was interested in sparking a conversation along those lines. Vladimir's comment is exactly the kind of question I was trying to encourage, but instead I feel like I've been asked to defend criticism that I never made in the first place. I was never trying to say that there was something wrong with the way Less Wrong is, or that we ought to do things to change our makeup. Maybe it would be good for us to, but that had nothing to do with my question. I was instead (trying to, and apparently badly) asking for people's opinions about whether or how our makeup along any partition (the ones that I mentioned or others) produces in us an inability to best solve the problems that we are interested in solving.
4[anonymous]
"Um, all I was saying was that women and black people are underrepresented here, but that ought not be explained away by the subject matter of Less Wrong. What does that have to do with my cultural background or the typical mind fallacy? What part of that do you disagree with?" To get back to basics for a moment: we don't know that women and black people are underrepresented here. Usernames are anonymous. Even if we suspect they're underrepresented, we don't know by how much -- or whether they're underrepresented compared to the internet in general, or the geek cluster, or what. Even assuming you want more demographic diversity on LW, it's not at all clear that the best way to get it is by doing something differently on LW itself.
0[anonymous]
You highlighted this point much better than I did.
2wedrifid
"Ought"? I say it 'ought' to be explained away be the subject matter of less wrong if and only if that is an accurate explanation. Truth isn't normative.
4datadataeverywhere
Is this a language issue? Am I using "ought" incorrectly? I'm claiming that the truth of the matter is that women are capable of rationality, and have a place here, so it would be wrong (in both an absolute and a moral sense) to claim that their lack of presence is due to this being a blog about rationality. Perhaps I should weaken my statement to say "if women are as capable as men in rationality, their underrepresentation here ought not be explained away by the subject matter". I'm not sure whether I feel like I should or shouldn't apologize for taking the premise of that sentence as a given, but I did, hence my statement.
2wedrifid
Ahh, ok. That seems reasonable. I had got the impression that you had taken the premise for granted primarily because it would be objectionable if it was not true and the fact of the matter was an afterthought. Probably because that's the kind of reasoning I usually see from other people of your species. I'm not going to comment either way about the premise except to say that it is inclination and not capability that is relevant here.
0CaveJohnson
People are touchy on this. I guess it's because in public discourse pointing something like this out is nearly always a call to change it.

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

3RobinZ
While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.
3katydee
It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point. In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.
2Perplexed
I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.
2Perplexed
And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?
1jacob_cannell
That happened to me three days ago or so after my last top level post. At the time said post was at -6 or so, and my karma was at 60+ something. Then, within a space of < 10 minutes, my karma dropped to zero (actually I think it went substantially negative). So what is interesting to me is the timing. I refresh or click on links pretty quickly. It felt like my karma dropped by more than 50 points instantly (as if someone had dropped my karma in one hit), rather than someone or a number of people 'tracking me'. However, I could be mistaken, and I'm not certain I wasn't away from my computer for 10 minutes or something. Is there some way for high-karma people to adjust someone's karma? Seems like it would be useful for troll control.

Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)

In short, the problems I see are: false positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.

5Larks
This sounds pretty exhaustive.
[-][anonymous]10

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should

... (read more)

Did anyone here read Buckminster Fuller's Synergetics? And if so, did you understand it?

0timtyler
Hefty quantities of Synergetics seem incomprehensible to me. Fuller was trying to make himself into a mystical science guru - and Synergetics laid out his domain. There is some worthwhile material in there - though you might be better off with more recent secondary sources.
0MartinB
But which sources? The reading of his that I understood I found amazing. And I can imagine that grasping Synergetics might be useful for my brain. Recommendations for reading are always welcome.
1timtyler
It depends on what aspect you are interested in. For example, I found this book pretty worthwhile: "Light Structures - Structures of Light: The Art and Engineering of Tensile Architecture" Illustrated by the Work of Horst Berger. ...and here's one of my links pages: http://pleatedstructures.com/links/
0Risto_Saarelma
Seconding this question. I found Synergetics in the local library when I was in high school, was duly impressed by Arthur C. Clarke's endorsement on the cover, but didn't understand much at all about the book. I was too young to tell back then whether the book was obvious math crankery or not, but the magnum opus style of Synergetics, combined with its being pretty completely ignored nowadays, makes it look a lot like an earlier example of the type of book Wolfram's A New Kind of Science turned out to be. Still, I'm curious about what the big idea was supposed to be and what people who seriously read the book thought about it. ETA: For the curious, the whole book is available online.

Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?

0timtyler
Don't do that - instead associate programs with sensory input streams.
0utilitymonster
Ok, but how?
0timtyler
A stream of sense data is essentially equivalent to a binary stream - the associated programs are the ones that output that stream.
0utilitymonster
Still don't get it. Let's say cards are being put in front of my face, and all I'm getting is their color. I can reliably distinguish the colors here: http://www.webspresso.com/color.htm. How do I associate a sequence of cards with a string? It doesn't seem like there is any canonical way of doing this. Maybe it won't matter that much in the end, but are there better and worse ways of starting?
2timtyler
Just so: the exact representation used is usually not that critical. If as you say you are using Solomonoff induction, the next step is to compress it - so any fancy encoding scheme you use will probably be stripped right off again.
0gwern
If you really can only distinguish those 255 colors, then you could associate each color with a single unique byte, and a sequence of n cards becomes a single bitstring with n*8 bits in it. For additional flavor, add some sort of compression. This is so elementary that I must be misunderstanding you somehow.
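For concreteness, here's a minimal sketch of the byte-per-color scheme in Python; the color names and the example card sequence are invented for illustration, and a real encoding would cover all ~255 distinguishable colors:

```python
# Toy encoding of a card/color sequence as a bitstring: one unique byte
# per distinguishable color, so n cards become n*8 bits.
colors = ["red", "green", "blue", "yellow"]   # stand-ins for the ~255 colors
code = {c: i for i, c in enumerate(colors)}   # color -> unique byte value

def encode(cards):
    """Map a sequence of n cards to a single bitstring of n*8 bits."""
    return "".join(format(code[c], "08b") for c in cards)

print(encode(["red", "blue", "red"]))  # -> '000000000000001000000000' (24 bits)
```

As gwern notes, the particular assignment of bytes to colors is arbitrary; Solomonoff induction's compression step strips it off again.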
0khafra
Good question. Unfortunately, I don't think it's possible to create a universal shortcut for "run each one, and see if you get the possible world you were aiming for," other than the well-known alternatives like AIXI-tl and MC-AIXI.

Looks like an interesting course from MIT:

Reflective Practice: An Approach for Expanding Your Learning Frontiers

Is anyone familiar with the approach, or with the professor?

The Idea

I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.

Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.

I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.

And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meani... (read more)

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research pdfs.

Nonetheless, the question about which they are puzzled regarding humans does raise some interesting questions regarding AIs, whether they be of the F persuasion or whether they are practicing uFs. The questions... (read more)

0timtyler
The questions are about a future which hasn't been written yet. So: "it depends". If you are asking what is most likely, my answers would be: machines will probably learn languages, yes there will be tests, prior knowledge-at-birth doesn't seem too important - since it can probably be picked up quickly enough - and "it depends": Humans will probably tell machines what to do in a wide range of ways - including writing code and body language - but a fair bit of it will probably be through high-level languages - at least initially. Machines will probably tell humans what they want in a similar way - but with more use of animation and moving pictures.
0LucasSloan
There are possible general AI designs that have knowledge of human language when they are first run. What is this "permitted" you speak of? All true seed AIs have the ability to learn about human languages, as human language is subset of the reality they will attempt to model, although it is not certain that they would desire to learn human language (if, say, destructive nanotech allows them to eat us quickly enough that manipulation is useless). "Object code" is a language.
1Perplexed
I guess it wasn't clear why I raised the questions. I was thinking in terms of CEV which, as I understand it, must include some dialog between an AI and the individual members of Humanity, so that the AI can learn what it is that Humanity wants. Presumably, this dialog takes place in the native languages of the human beings involved. It is extremely important that the AI understand words and sentences appearing in this dialog in the same sense in which the human interlocutors understand them. That is what I was getting at with my questions.
3LucasSloan
Nope. It must include the AI's modeling (many) humans under different conditions, including ones where the "humans" are much smarter, know more, and suffer less from akrasia. It would be utterly counterproductive to create an AI which sat down with a human and asked em what ey wanted - the whole reason for the concept of a CEV is that humans can't articulate what we want. Even if you and the AI mean exactly the same thing by all the words you use, words aren't sufficient to convey what we want. Again, this is why the CEV concept exists instead of handing the AI a laundry list of natural-language desires.
3Perplexed
Uhmm, how are the models generated/validated?
-6Perplexed

Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.

9Vladimir_Nesov
I don't think Aumann's agreement theorem has anything to do with taking people's opinions as evidence. Aumann's agreement theorem is about agents turning out to have been agreeing all along, given certain conditions, not about how to come to an agreement, or worse how to enforce agreement by responding to others' beliefs. More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for "boo disagreement", having nothing to do with the theorem itself. It's easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation.
3steven0461
People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?
3Wei Dai
I think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community. I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement: Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?
1steven0461
I haven't read your post and my understanding is still hazy, but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence? If they do, then I don't see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I'm comfortable calling it "updating on each other's opinions". Regardless of Aumann-like results, I don't see how: could possibly be controversial here, as long as people's opinions probabilistically depend on the truth.
3Wei Dai
You're right, sometimes the agreement protocol terminates before the agents fully reconstruct each other's evidence, and they end up with a different agreed probability than if they just shared evidence. But my point was mainly that exchanging information like this by repeatedly updating on each other's posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he's telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don't think humans can benefit from them because it's too hard to do these logical deductions in our heads. Also, it seems pretty obvious that you can't offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can't compute the posterior probability of either of them, given an announcement from the other. It might be that a specialized "disagreement arbitrator" can still play some useful role, but I don't see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.
3Perplexed
They don't necessarily reconstruct all of each other's evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent's evidence samples ("4 reds and 4 blacks"), but they cannot reconstruct the exact sequences ("RRBRBBRB"). And they can update again to perfect agreement regarding the urn contents. Edit: minor cleanup for clarity. At least that is my understanding of Aumann's theorem.
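Here is a minimal sketch of the urn arithmetic, assuming a 10-ball urn with a uniform prior over the number of red balls (the urn size, prior, and sample counts are my own illustrative numbers, not Perplexed's). It shows why counts suffice: the posterior over urn compositions depends only on how many reds and blacks were drawn, never on the order.

```python
# Posterior over urn compositions from observed counts, sampling with
# replacement. Pooling both agents' counts gives the full-communication
# posterior, since the binomial coefficient is constant in k and cancels.
from math import comb

N_BALLS = 10  # urn has 10 balls, k of them red; uniform prior over k

def posterior(red, black):
    """P(k red balls in urn | observed counts of red and black draws)."""
    like = [comb(red + black, red) * (k / N_BALLS) ** red
            * (1 - k / N_BALLS) ** black for k in range(N_BALLS + 1)]
    total = sum(like)
    return [l / total for l in like]

# Agent A saw (4R, 4B), agent B saw (6R, 2B); pooled counts are (10R, 6B).
print(posterior(4 + 6, 4 + 2))
```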
2steven0461
That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.
2Perplexed
Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information. In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.
0timtyler
That comment leaves me wondering what "pure Bayesianism" is. I don't think Bayesianism is a recipe for action in the first place - so how can "pure Bayesianism" be telling agents how they should be spending their time?
2Perplexed
By "pure Bayesianism", I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled "Comments" and particularly the subsection at the very end entitled "Another dimension?". A pure "Jaynes Bayesian" seeks the truth, not because it is useful, but rather because it is truth. By contrast, we might consider a "de Finetti Bayesian" who seeks the truth so as not to lose bets to Dutch bookies, or a "Wald Bayesian" who seeks truth to avoid loss of utility. The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.
1timtyler
A truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies - not on seeking holy grails.
-2[anonymous]
It tells them everything. That includes inferences right down to their own cognitive hardware and the implications thereof. Given that the very meaning of 'should' can be reduced to cognitions of the speaker, Bayesian reasoning is applicable.
-3timtyler
Hi! As brief feedback, I was trying to find out what "pure Bayesianism" was being used to mean - so this didn't help too much.
3MBlume
for an ideal Bayesian, I think 'one can learn from X' is categorically true for all X....
1Stuart_Armstrong
You also have to be able to deduce how much of the other agent's information is shared with you. If you and they got your posteriors by reading the same blogs and watching the same TV shows, that is very different from the case where you reached the same conclusion through completely different channels.
3Mitchell_Porter
Somewhere in there is a joke about the consequences of a sedentary lifestyle.
0Vladimir_Nesov
The theorem doesn't involve any updating, so it's not a salient example in a discussion of updating, much less a proxy for it. To answer literally, simply not mentioning the theorem would've done the trick, since there didn't seem to be a need for elaboration.
0timtyler
For other people's opinions, perhaps see: http://www.takeonit.com/
0JohnDavidBustard
I'm not sure about having a centralised group doing this, but I did experiment with making a tool that could help infer consequences from beliefs. Imagine something a little like this, but with chains of philosophical statements that have degrees of confidence. Users would assign confidence to axioms and construct trees of argument using them; the system would automatically determine the confidence of conclusions. It could even exist as a competitive game, with a community determining the confidence of axioms. It could also be used to rapidly determine differences in opinion, i.e. to infer the main points of contention based on different axiom weightings. If anyone knows of anything similar or has suggestions for such a system, I'd love to hear them, including any reasons why it might fail, because I think it's an interesting solution to the problem of how to debate efficiently and reasonably.
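As a rough sketch of how such confidence propagation might work: the multiply-through combination rule below is an assumption of mine (it treats premises as independent, which a real system would need to relax), and the axiom names and numbers are invented.

```python
# Naive confidence propagation through an argument tree: a conclusion's
# confidence is the product of its premises' confidences times the
# strength of the inference step. Ignores dependence between premises.
axioms = {"A1": 0.9, "A2": 0.7, "A3": 0.95}

# Each conclusion: (list of premises, strength of the inference step).
# Conclusions can themselves serve as premises of later arguments.
arguments = {
    "C1": (["A1", "A2"], 0.8),
    "C2": (["C1", "A3"], 0.9),
}

def confidence(node):
    if node in axioms:
        return axioms[node]
    premises, strength = arguments[node]
    conf = strength
    for p in premises:
        conf *= confidence(p)
    return conf

print(confidence("C2"))  # 0.9 * (0.8 * 0.9 * 0.7) * 0.95 ~= 0.431
```

Re-running the propagation under different users' axiom weightings, and diffing the conclusions, would be one way to surface the main points of contention automatically.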

Is there a rough idea of how the development of AI will be achieved, i.e. something like the whole brain emulation roadmap? Although we can imagine a silver-bullet-style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady development of sophistication and improved results, but few leaps. It is as if the problem itself is a large collection of individual challenges whose solution requires masses of training data and techniques that do not generalise well.

That ... (read more)

6rwallace
Your assessment is along the right lines, though if anything a little optimistic; uploading is an enormously difficult engineering challenge, but at least we can see in principle how it could be done, and recognize when we are making progress, whereas with AI we don't yet even have a consensus on what constitutes progress. I'm personally working on AI because I think that's where my talents can be best used, and I think it can deliver useful results well short of human equivalence, but if you figure you'd rather work on uploading, that's certainly a reasonable choice. As for what uploads will do if and when they come to exist, well, there's going to be plenty of time to figure that out, because the first few of them are going to spend the first few years having conversations like, "Uh... a hatstand?" "Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex." But e.g. The Planck Dive is a good story set in a world where that technology is mature enough to be taken for granted.
6sketerpot
The phrase "Fork me on GitHub" has just taken on a more sinister meaning.
1timtyler
I expect that prediction will probably be cracked first.
0JohnDavidBustard
Thanks for the link, very interesting.
1Houshalter
Emulating an entire brain, and finding out how the higher-intelligence parts work and adapting them for practical purposes, are two entirely different achievements. Even if you could upload a brain onto your computer and let it run, it would be absurdly slow; however, simulating some kind of new optimization process we find from it might be plausible. And either way, don't expect a singularity anytime soon with that. Scientists believe it took thousands of years after modern intelligence emerged for us to learn symbolic thought. Then thousands more before we discovered the scientific method. It's only now that we are finally discovering rational thinking. Maybe an AI could start where we left off, or maybe it would take years before it could even get to the level of being able to do that, and then years more before it could make the jump to the next major improvement, assuming there even is one.

I'm not arguing against AI here at all. I believe a singularity will probably happen, and soon, but emulation is definitely not the way to go. Humans have way too many flaws that we don't even know would be possible to fix, even if we knew what the problem was in the first place. What is the ultimate goal in the first place? To do something along the lines of replicating the brains of some of the most intelligent people and forcing them to work on improving humanity/developing AI? Has anyone considered that there is a far more realistic way of doing this through cloning, eugenics, education research, etc.? Of course no one would do it because it is immoral, but then again, what is the difference between the two?
0JohnDavidBustard
The question of the ultimate goal is a good one. I don't find arguments of value based on utilitarian values to be very convincing. In contrast, I prefer enlightened self-interest (other people are important because I like them and feel safe in a world where they are valued). So for me, some form of immortality is much more important than my capabilities (or something else's, in the case of AI) in that state.

In addition, the efficiency gains of being able to 'step through' a simulation of a system, and the ability to perform repeatable automated experiments on such a system, convey enormous benefits (arguably this capability is what is driving our increasing productivity), so being able to simulate the brain may well lead to exponential improvements in our understanding of psychology and consciousness.

In terms of performance concerns, there is the potential for a step change in the economics of high-performance computing: while you may only be willing to spend a couple of thousand dollars on a computer to play games with, you may well take out a (lifetime?) mortgage to ensure you don't die. In terms of social consequences, one could imagine that the world economy would switch from supporting biology to supporting technology (it would be interesting to calculate the relative economic cost of supporting a simulated person rather than a biological one).

Recent work with brain-machine interfaces also points towards the enormous flexibility of the mind to adapt to new inputs and outputs. With the improved debugging capability of simulation, mental enhancement becomes substantially more feasible. As our understanding of such interactions improves, a virtual environment could be created which convincingly provides the illusion of a world of limitless abundance. And then there is the possibility of replication, storing a person in a willing state and resetting them to that state after they complete a task. This leads to the enormous social consequence of convincingly disprovi
[-]Cyan00

Request: someone make a fresh open thread, and someone else make a rationality thread. I'd do it myself, but I've already done one of each this year; each kind of thread is usually good for two or three karma, and it wouldn't be fair.

3JGWeissman
With the new discussion section, do we really need these recurring threads?
6NancyLebovitz
I don't know. Open threads strike me as a better structure for conversation.
4Cyan
Probably not the open thread, but I'd like the tradition of monthly rationality quotes threads to continue.
2whpearson
Personally I don't care about karma much, you can have my slice of the karma pie. Perhaps put a note reminding other people that they can post them.

Shangri-La dieters: So I just recently started reading through the archives of Seth Roberts' blog, and it looks like there are tons of benefits to getting 3 or so tablespoons of flax seed oil a day (cognitive performance, gum health, heart health, etc.). That said, it also seems to reduce appetite/weight, neither of which I want. I haven't read through Seth's directory of related posts yet, but does anyone have any advice? I guess I'd be willing to set alarms for myself so that I remembered to eat, but it just sounds really unpleasant and unwieldy.

2AnnaSalamon
Perhaps add your flax seed oil to food, preferably food with notable flavors of various kinds. It's tasty that way and should avoid the tasteless calories that are supposed to be important to Shangri-La (although I haven't read about Shangri-La, so don't trust me).
1jimmy
Flaxseed oil has a strong odor. I think most people try to choke it down with their breath held to avoid the smell. It probably wouldn't count as 'flavorless calories' if you didn't. If you can't stand that, eat it with some consistent food.
0Will_Newsome
Of note is that I was recommended fish oil instead as it has a better omega-3/omega-6 ratio, so I'll probably go that route.
[-][anonymous]00

Not sure if this has been linked before, but this post about tracking your habits seems like a useful self-management technique.

NYT article on good study habits: http://www.nytimes.com/2010/09/07/health/views/07mind.html?_r=1

I don't have time to look into the sources but I am very interested in knowing the best way to learn.

David Friedman laments another misuse of frequentism.

[-][anonymous]00

I have a basic understanding of Markov Chains but I'm curious as to how they're used in artificial intelligence. My main two guesses are:

  1.) They are used to make decisions (e.g. Markov decision process) - By factoring an action component into the Markov chain, you can use Markov chains to make decisions in situations where a decision won't have a definite outcome but will instead adjust the probabilities of outcomes (see the sketch after this comment).

2.) They are used to evaluate the world (eg. Markov Chain Monte Carlo) - As the way the world develops at a high level can seem probabilistic... (read more)
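On guess (1), here is a minimal value-iteration sketch over an invented two-state "machine maintenance" MDP; the states, actions, probabilities, and rewards are all made-up illustrations of how an action component turns a Markov chain into a decision problem:

```python
# Value iteration on a tiny Markov decision process: each (state, action)
# pair gives a distribution over (next_state, reward), and we solve for
# the value function and greedy policy via the Bellman optimality update.
GAMMA = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "ok": {
        "run":    [(0.9, "ok", 1.0), (0.1, "broken", 0.0)],
        "repair": [(1.0, "ok", 0.5)],
    },
    "broken": {
        "run":    [(1.0, "broken", 0.0)],
        "repair": [(0.8, "ok", -0.5), (0.2, "broken", -0.5)],
    },
}

V = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate the Bellman update to (near) convergence
    V = {
        s: max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

policy = {
    s: max(
        transitions[s],
        key=lambda a, s=s: sum(p * (r + GAMMA * V[s2])
                               for p, s2, r in transitions[s][a]),
    )
    for s in transitions
}
print(V, policy)
```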

1jimrandomh
For a concrete example of Markov models in AI, take a look at the Viterbi search algorithm, which is heavily used in speech and natural language recognition.
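For the curious, a compact Viterbi implementation; the toy weather/activity HMM is the standard textbook example, with illustrative probabilities rather than anything from a real recognizer:

```python
# Viterbi decoding for a discrete hidden Markov model: find the most
# likely hidden state sequence given a sequence of observations.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}

    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Pick the predecessor that maximizes the path probability.
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path

    best_prob, best_state = max((V[-1][s], s) for s in states)
    return best_prob, path[best_state]

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
```

In speech recognition, the hidden states are phonemes or words and the observations are acoustic features, but the dynamic-programming structure is the same.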
2[anonymous]
Thanks - good example.

Recently remembered this old Language Log post on the song of the Zebra Finch; thought it might be relevant here. Whether or not the idea applies to human languages, I think it's an interesting demonstration of the sort of surprising things evolution can work with: a highly constrained song is indirectly encoded by a much simpler bias in learning.

An Introduction to Probability and Inductive Logic by Ian Hacking

Have any of you read this book?

I have been invited to join a reading group based around it for the coming academic year and would like the opinions of this group as to whether it's worth it.

I may join in just for the section on Bayes. I might even finally discover the correct pronunciation of "Bayesian". ("Bay-zian" or "Bye-zian"?)

Here's a link to the book: http://www.amazon.co.uk/Introduction-Probability-Inductive-Logic/dp/0521775019/ref=sr_1_2?ie=UTF8&s=boo... (read more)

3sketerpot
I've only ever heard Bayesian pronounced "Bay-zian".
3ata
That's how I usually hear it ("Bayes"+"ian", right?), though I've also heard it pronounced like "Basian" (rhyming with "Asian") or occasionally "Bay-esian" (rhyming with "Cartesian").
[-][anonymous]00

Idea - Existential risk fighting corporates

People of normal IQ are advised to work our normal day jobs, using the best competency that we have, and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value and there is such a diversity of skills that they could not make a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might ... (read more)

I'd like to discuss, with anyone who is interested, the ideas of Metaphysics Of Quality, by Robert Pirsig (laid out in Lila, An enquiry into Morals)

There are many aspects of MOQ that might make a rationalist cringe, like moral realism and giving evolution a path and purpose. But there are many interesting concepts which I heard of for the first time when I read MOQ. The fourfold division into inorganic, biological, social and intellectual static patterns of quality is quite intriguing. Many things that the transhumanist community talks about actually interact a... (read more)

1Snowyowl
Really? I would have arrived at the opposite conclusion. No social structure can be permanent without the biological level being fixed, therefore we should do more research into biological alteration in order to stabilize our biology should it become unstable. For instance, pre-implantation genetic diagnosis would enable us to almost eradicate most genetic diseases, thus maintaining our biological quality. I'm not saying it doesn't have corresponding problems, just that an attitude of "we should cease research in this field because we might find something dangerous" is overreacting.
0blogospheroid
I don't support Fukuyama's conclusion. I was just mentioning that Fukuyama realised that his "end of history" hypothesis was obsolete, as the biological quality patterns, which he assumed were more or less unchanging, are not fixed. Genetic engineering is an intellectual + social pattern imposing on a biological pattern. By a naive reading of Pirsig, it appears moral. But if the biological pattern is not fully understood, it might lead to many unanticipated consequences. I definitely support the eradication of genetic diseases, if the changes made are those that are present in many normal people and come without much downside. I support intelligence amplification, but we simply don't know enough to do it without issues. Eliezer's perspective is that humans are godshatter (a hodgepodge of many biological, social and intellectual static patterns) and it will take a very powerful intelligence to understand morality and extrapolate it. I believe that thinking about Pirsig's work can inform us a little on which areas we should choose to understand first.
0[anonymous]
This seems incorrect, as it's not hard to imagine a social structure supporting a wide variety of different biological/non-biological intelligences, as long as they were reasonably close to each other in morality-space. There's plenty of things at the level of biology that have no impact on morality that we'd certainly like to change.
0blogospheroid
During the process of creating those non-biological intelligences or modifying the biological persons, the social structure would be in flux. Some similarities would be maintained, but there would also be many changes. Under our laws, murder is illegal, but erasure of an upload backed up as recently as the previous day would not be classified as grave a crime as murdering an un-backed-up person. These changes would be at the social level.

Does anyone else ever browse through comments, spot one and think "why is the post upvoted to 1?" and then realise that the vote was from you? I seem to do that a lot. (In nearly every case I leave the votes stand.)

5Kaj_Sotala
I don't recall ever doing that. Do you leave the votes stand because you remember/re-invent your original reason for upvoting, or because something along the lines of "well, I must've had a good reason at the time"?
0wedrifid
This one. And sometimes my surprise is because the upvoted comment is surrounded by other comments that are 'better' than it. This I can often fix by upvoting the context instead of removing my initial upvote. (And if I went around removing my votes I would quite possibly end up in an infinite loop of contrariness.)
[-][anonymous]00

I made this site last month: http://www.areyou1in1000000.com

Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.

Consider two beliefs:

  1. The next millennium will be the most critical in mankind’s existence because in most of the Everett branches arising out of today mankind will go extinct or start spreading through the stars.

  2. Eliezer’s work on friendly AI makes him the most significant determinant of our fate in (1).

Let 10^N represent the average across our future Everett branches of the... (read more)

5JamesAndrix
(2) is ambiguous. Getting to the stars requires a number of things to go right. Eliezer serves relatively little use in preventing a major nuclear exchange in the next 10 years, or bad nanotech, or garage-made bioweapons, or even UFAI development. FAI is just the final thing that needs to go right; everything else needs to go mostly right until then.
3Snowyowl
And I can think of a few ways humanity can get to the stars even if FAI never happens.
3KevinC
Can you provide a cite for the notion that Eliezer believes (2)? Since he's not likely to build the world's first FAI in his garage all by himself, without incorporating the work of any of the other thousands of people working on FAI and FAI's necessary component technologies, I think it would be a bit delusional of him to believe (2) as stated. Which is not to suggest that his work is not important, or even among the most significant work done in the history of humankind (even if he fails, others can build on it and find the way that works). But that's different from the idea that he, alone, is The Most Significant Human Who Will Ever Live. I don't get the impression that he's that cocky.
2James_Miller
Eliezer has been accused on LW of having or possibly having delusions of grandeur for essentially believing in (2). See here: http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/ My main point is that even if Eliezer believes in (2) we can't conclude that he has such delusions unless we also accept that many LW readers also have such delusions.
3wedrifid
Really? How about "when you are, in fact, 1 in 10^(N-12) and have good reason to believe it"? Throwing in a large N doesn't change the fact that 10^N is still 1,000,000,000,000 times larger than 10^(N-12), nor does it mean we could not draw conclusions about belief (2). (Not commenting on Eliezer here, just suggesting the argument is not all that persuasive to me.)
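For concreteness, here is the arithmetic behind the trillion-fold factor; note that it holds for any N:

```latex
\frac{10^{N}}{10^{N-12}} = 10^{12} = 1{,}000{,}000{,}000{,}000
```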
2Snowyowl
I agree. Somebody has to be the most important person ever. If Eliezer really has made significant contributions to the future of humanity, he's much more likely to be that most important person than a random person out of 10^N candidates would be.
1James_Miller
The argument would be that Eliezer should doubt his own ability to reason if his reason appears to cause him to think he is 1 in 10^N. My claim is that if this argument is true, then everyone who believes (1) and thinks N is large should, to an extremely close approximation, have just as much doubt in their own ability to reason as Eliezer should have in his.
1Snowyowl
Agreed. Not sure if Eliezer actually believes that, but I take your point.
2James_Miller
To an extremely good approximation, one-in-a-million events don't ever happen.
3wedrifid
To an extremely good approximation, this Everett branch doesn't even exist. Well, it wouldn't if I used your definition of 'extremely good'.
1James_Miller
Your argument seems to be analogous to the false claim that it's remarkable that a golf ball landed exactly where it did (regardless of where it landed), because the odds of that happening were extremely small. I don't think my argument is analogous, because there is reason to think that being one of the most important people ever to live is a special event, clearly distinguishable from many, many others.
1gwern
Yet they are quite easy to generate - flip a coin a few times.
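A quick numeric check of gwern's point; a minimal sketch, where the choice of 20 flips is mine, not gwern's:

```python
# Any particular sequence of 20 fair coin flips has probability 2^-20,
# which is already rarer than one in a million.
n_flips = 20
print(2 ** n_flips)    # 1,048,576 equally likely sequences
print(0.5 ** n_flips)  # ~9.54e-07, i.e. below one in a million
```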
0timtyler
Hear, hear. That is a trillion times more probable!
2rwallace
It's not about the numbers, and it's not about Eliezer in particular. Think of it this way: Clearly, the development of interstellar travel (if we successfully accomplish this) will be one of the most important events in the history of the universe. If I believe our civilization has a chance of achieving this, then in a sense that makes me, as a member of said civilization, important. This is a rational conclusion. If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.
0Houshalter
You assume it is. But maybe you will invent AI and then use it to design a plan for building a starship in your garage. So it's not simply impossible; it's just unknown, and even if you could, there's no reason to believe that it would be a good decision. But hey, in a hundred years, who knows what people will build in their garages, or the equivalent thereof. I imagine people a hundred years ago would find our projects pretty strange.
2prase
I think I don't understand (1) and its implications. How does the fact that in most of the branches we go extinct imply that we are the most important couple of generations (this is how I interpret the trillion)? Our importance lies in our decisions. These decisions influence the number of branches in which people die out. If we take (1) as given, it means we weren't successful in mitigating the existential risk, leaving no place to exercise our decisions, and thus our importance.

Omega comes up to you and tells you that if you believe in science, he will make your life 1000 utilons better. He then goes on to tell you that if you believe in god, he will make your afterlife 1 million utilons better. And finally, if you believe in both science and god, you won't get accepted into the afterlife, so you'll only get the 1000 utilons.

If it were me, I would tell Omega that he's not my real dad and go on believing in science and not believing in god.

Am I being irrational?

EDIT: If Omega is an infinitely all-knowing oracle, the answer may be... (read more)

1NihilCredo
The definition of Omega includes him being completely honest and trustworthy. He wouldn't tell you "I will make your afterlife better" unless he knew that there is an afterlife (otherwise he couldn't make it better), just like he wouldn't say "the current Roman Emperor is bald". If he were to say instead "I will make your afterlife better, if you have one", I would keep operating on my current assumption that there is no such thing as an afterlife. Oh, I almost forgot - what does it even mean to "believe in science"?
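Under NihilCredo's conditional reading ("...if you have one"), the choice reduces to a threshold on your probability that an afterlife exists at all. Here is a minimal expected-utility sketch, using only the payoffs stated in the puzzle; the probabilities tried below are illustrative assumptions, not part of the original:

```python
# Expected utilities for the Omega puzzle under the reading where the
# afterlife payoff only materializes with probability p_afterlife.
# Payoffs come from the puzzle; the priors tried below are made-up examples.
def expected_utilities(p_afterlife):
    return {
        "science only": 1000,                 # guaranteed this-life bonus
        "god only": p_afterlife * 1_000_000,  # pays off only if there is an afterlife
        "both": 1000,                         # believing both forfeits the afterlife bonus
    }

for p in (0.0005, 0.002, 0.02):
    eus = expected_utilities(p)
    best = max(eus, key=eus.get)
    print(f"p(afterlife) = {p}: {eus} -> best: {best}")
```

On these numbers the crossover sits at p = 1000/1,000,000 = 0.001: below that threshold, sticking with science maximizes expected utility, which is presumably why the answer turns on whether Omega's phrasing guarantees that an afterlife exists (pushing p to 1).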