
Less Wrong: Open Thread, September 2010

Post author: matt 01 September 2010 01:40AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


Comment author: Kaj_Sotala 02 September 2010 09:04:37PM *  22 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

I see the same thing with tennis, golf, music, and just about any other skill, at least at non-professional levels. And research supports the obvious, that practice is the main determinant of success in a particular field.

As a practical matter, you can't keep logs of all the hours you have spent practicing various skills. And I wonder how that affects our perception of what it takes to be a so-called winner. We focus on the contest instead of the practice because the contest is easy to measure and the practice is not.

Complicating our perceptions is professional sports. The whole point of professional athletics is assembling freaks of nature into teams and pitting them against other freaks of nature. Practice is obviously important in professional sports, but it won't make you taller. I suspect that professional sports demotivate viewers by sending the accidental message that success is determined by genetics.

My recommendation is to introduce eight-ball into school curricula, but in a specific way. Each kid would be required to keep a log of hours spent practicing on his own time, and there would be no minimum requirement. Some kids could practice zero hours if they had no interest or access to a pool table. At the end of the school year, the entire class would compete in a tournament, and they would compare their results with how many hours they spent practicing. I think that would make real the connection between practice and results, in a way that regular schoolwork and sports do not. That would teach them that winning happens before the game starts.

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

ETA: I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for, AFAIK. But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Comment author: jimrandomh 03 September 2010 01:30:47PM 6 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something.

This is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai!

Comment author: hegemonicon 03 September 2010 03:59:53AM *  6 points [-]

people in this community are unusually prone to feeling that they're stupid if they do badly at something

I suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this, combined with the fact that they're probably used to being "the smart one", makes it extremely intimidating to post anything, and extremely demotivating if they make a mistake.

In the interests of spreading the idea that it's ok if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community.

I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even skills like metacognition tend not to transfer outside the specific domain in which you learned them. Intelligence is one of the few things that gives you a general problem-solving/task-completion ability.

Comment author: xax 03 September 2010 09:07:19PM 1 point [-]

Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Only if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.

Comment author: Daniel_Burfoot 03 September 2010 03:47:37AM 4 points [-]

I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins).

As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.

Comment author: Kaj_Sotala 03 September 2010 08:38:20AM 2 points [-]

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task.

This is true. Intelligence research has shown that intelligence is more useful for more complex tasks; see, e.g., Gottfredson 2002.

Comment author: [deleted] 09 September 2010 02:33:41AM 3 points [-]

I like this anecdote.

I never valued intelligence relative to practice, thanks to an upbringing that focused pretty heavily on the importance of effort over talent. I'm more likely to feel behind, insufficiently knowledgeable to the point that I'm never going to catch up. I don't see why it's necessarily a cheerful observation that practice makes a big difference to performance. It just means that you'll never be able to match the person who started earlier.

Comment author: Wei_Dai 10 September 2010 07:27:28AM *  16 points [-]

An Alternative To "Recent Comments"

For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.

Explanation of features:

  • loads and threads up to 400 most recent comments on one screen
  • use [↑] and [↓] to mark favored/disfavored authors
  • comments are color coded based on author/points (pink) and recency (yellow)
  • replies to you are outlined in red
  • hover over [+] to view single collapsed comment
  • hover over/click [^] to highlight/scroll to parent comment
  • marks comments read (grey) based on scrolling
  • shows only new/unread comments upon refresh
  • date/time are converted to your local time zone
  • click comment date/time for permalink

To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.

ETA: I've placed the script in the public domain. Chrome is not supported.

Comment author: Wei_Dai 10 September 2010 08:35:57AM 4 points [-]

Here's something else I wrote a while ago: a script that gives all the comments and posts of a user on one page, so you can save them to a file or search more easily. You don't need Greasemonkey for this one, just visit http://www.ibiblio.org/weidai/lesswrong_user.php

I put in a 1-hour cache to reduce server load, so you may not see the user's latest work.
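
(The real thing is a server-side PHP page, but the caching idea is simple enough to sketch. Here's a rough R version, with invented names, just to illustrate: re-download only if the cached copy is more than an hour old.)

    cached_fetch <- function(url, cache_file = "user_comments.html", max_age = 3600) {
      # seconds since the cached copy was written (Inf if there is no copy yet)
      age <- if (file.exists(cache_file)) {
        as.numeric(difftime(Sys.time(), file.mtime(cache_file), units = "secs"))
      } else {
        Inf
      }
      if (age > max_age) download.file(url, cache_file, quiet = TRUE)
      readLines(cache_file, warn = FALSE)
    }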

Comment author: DSimon 07 September 2010 08:28:02PM *  13 points [-]

I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.

I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:

  • Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).

  • Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.

  • Not rigged (or not obviously so): The player shouldn't have the feeling that the game is designed to directly reward rationality (even though it is, in a sense). The player should think that they are solving a general problem with rationality as their asset.

  • Not allegorical: I don't want to raise any likely mind-killing associations in the player's mind, like politics or religion. The problem they are solving should be allegorical to real world problems, but to a general class of problems, not to any specific problems that will raise hackles and defeat the educational purpose of the game.

  • Surprising: The rationality technique being taught should not be immediately obvious to an untrained player. A typical first session should involve the player first trying an irrational method, seeing how it fails, and then eventually working their way up to a rational method that works.

A lot of the rationality-related games that people bring up fail some of these criteria. Zendo, for example, is not "dramatic in outcome" enough for my taste. Avoiding confirmation bias and understanding something about experimental design makes one a better Zendo player... but in my experience not as much as just developing a quick eye for pattern recognition and being able to read the master's actions.

Anyone here have any suggestions for possible game designs?

Comment author: Emile 08 September 2010 10:29:11PM 8 points [-]

Note also the Wiki page, with links to previous threads (I just discovered it, and I don't think I had noticed the previous threads. This one seems better!)

One interesting game topic could be building an AI. Make it look like a nice and cutesy adventure game, with possibly some little puzzles, but once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky. That's more about SIAI propaganda than rationality though.

One interesting thing would be to exploit the conventions of video games but make actual winning require seeing through those conventions. For example, have a score, and certain actions give you points, with nice shiny feedback and satisfying "shling!" sounds, but some actions are vitally important yet not rewarded by any feedback.

For example (to keep in the "build an AI" example), say you can hire scientists, and each scientist's profile page lists plenty of impressive certifications (stats like "experiment design", "analysis", "public speaking", etc.), plus some filler text about what they did their thesis on and boring stuff like that (think: the stats get big icons and sit at the top, while the filler text looks like boring background text). Once you've hired a scientist, you get various bonuses (money, prestige points, experiments), but the only one of those factors that matters at the end of the game is whether the scientist is "not stupid", and the only way to tell that is from tell-tale signs buried in the "boring" filler text: things like also having a degree in theology, or having published a paper on homeopathy... stuff that would indeed be a bad sign in a scientist, but that nothing in the game ever tells you is bad.

So basically the idea would be that the rules of the game you're really playing wouldn't be the ones you would think at first glance, which is a pretty good metaphor for real life too.

It needs to be well-designed enough so that it's not "guessing the programmer's password", but that should be possible.

Making a game around experiment design would be interesting too - have some kind of physics / chemistry / biology system that obeys some rules (mostly about transformations, not some "real" physics with motion and collisions etc.), have game mechanics that allow you to do something like experimentation, and have a general context (the feedback you get, what other characters say, what you can buy) that points towards a slightly wrong understanding of reality. This is bouncing off Silas' ideas: things that people say are good for you may not really be so, etc.

Here again, you can exploit the conventions of video games to mislead the player. For example, red creatures like eating red things, blue creatures like eating blue things, etc. - but the rule doesn't always hold.

Comment author: DSimon 08 September 2010 10:55:28PM *  4 points [-]

Here again, you can exploit the conventions of video games to mislead the player.

I think this is a great idea. Gamers know lots of things about video games, and they know them very thoroughly. They're used to games that follow these conventions, and they're also (lately) used to games that deliberately avert or meta-comment on these conventions for effect (e.g. Achievement Unlocked), but there aren't too many games I know of that set up convincingly normal conventions only to reveal that the player's understanding is flawed.

Eternal Darkness did a few things in this area. For example, if your character's sanity level was low, you the player might start having unexpected troubles with the interface: e.g. the game would refuse to save on the grounds that "It's not safe to save here", the game would pretend that it was just a demo of the full game, the game would try to convince you that you had accidentally muted the television (though the screaming sound effects would still continue), and so on. It's too bad that those effects, fun as they were, were (a) very strongly telegraphed beforehand, and (b) used only for momentary hallucinations, not to indicate that the player's original understanding was actually the incorrect one.

Comment author: Raemon 09 September 2010 02:39:21AM 17 points [-]

The problem is that, simply put, such games generally fail on the "fun" meter.

There is a game called "The Void," which begins with the player dying and going to a limbo-like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first-person shooter, but if you play it as a first-person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First-Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.

The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that until hours after you made the initial mistake.

Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.

The game is filled with awesome flavor and a lot of awesome mechanics (specifically, mechanics I had independently imagined and wanted to build my own game around). It looked to me like one of the coolest-sounding games ever. And it was amazingly NOT FUN AT ALL for the first four hours of play. I stuck with it anyway, if for no other reason than to figure out how a game with such awesome ideas could turn out so badly. Eventually I learned how to play, and while it never became fun it did become beautiful and poignant, and it's now one of my favorite games ever. But most people do not stick with something they don't like for four hours.

Toying with players' expectations sounds cool to the people who understand how the toying works, but is rarely fun for the players themselves. I don't think that's an insurmountable obstacle, but if you're going to attempt this, you need to really fathom how hard it is to work around. Most games telegraph everything for a reason.

Comment author: PeerInfinity 09 September 2010 05:23:22PM 2 points [-]

"once you flip the switch, if you didn't get absolutely everything exactly right, the universe is tiled with paperclips/tiny smiley faces/tiny copies of Eliezer Yudkowsky."

See also: The Friendly AI Critical Failure Table

And I think all of the other suggestions you made in this comment would make an awesome game! :D

Comment author: humpolec 07 September 2010 09:00:57PM 8 points [-]

RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could

  • explicitly make optimization a part of the game's storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
  • create some situations where the obvious rules-of-thumb (gather strongest items, etc.) don't apply - make the player shut up and multiply
  • create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)

Sorry if this isn't very fleshed-out, just a possible direction.

Comment author: khafra 09 September 2010 11:01:37AM *  5 points [-]

I'm not sure if transformice counts as a rationalist game, but it appears to be a bunch of multiplayer coordination problems, and the results seem to support ciphergoth's conjecture on intelligence levels.

Comment author: Emile 09 September 2010 12:03:46PM 2 points [-]

Transformice is awesome :D A game hasn't made me laugh that much for a long time.

And it's about interesting, human things, like crowd behaviour and trusting the "leader" and being thrust in a position of responsibility without really knowing what to do ... oh, and everybody dying in funny ways.

Comment author: steven0461 07 September 2010 09:53:11PM *  3 points [-]

Text adventures seem suitable for this sort of thing, and are relatively easy to write. They're probably not as good for mass appeal, but might be OK for mass nerd appeal. For these purposes, though, I'm worried that rationality may be too much of a suitcase term, consisting of very different groups of subskills that go well with very different kinds of game.

Comment author: SilasBarta 07 September 2010 10:11:35PM 8 points [-]

Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything that's distinguishable from chance.

(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)

Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.

Comment author: humpolec 08 September 2010 12:21:47PM *  11 points [-]

For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.

What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.
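
(A sketch of that mechanic, in R for concreteness, with all names invented: the potion touches only the feedback layer, never the hidden hit points that combat actually checks.)

    drink_healing_potion <- function(state) {
      # cosmetic feedback only: sparkles, a sound cue, reassuring text...
      state$sparkle_frames <- 30
      message("You feel your wounds closing!")
      # ...while the hidden state$hp that decides death stays untouched
      state
    }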

Comment author: steven0461 07 September 2010 10:41:07PM *  6 points [-]

I think in many types of game there's an implicit convention that they're only going to be fun if you follow the obvious strategies on auto-pilot and don't optimize too much or try to behave in ways that would make sense in the real world, and breaking this convention without explicitly labeling the game as competitive or a rationality test will mostly just be annoying.

The idea of having a game resemble real-world science is a good one and not one that as far as I know has ever been done anywhere near as well as seems possible.

Comment author: SilasBarta 07 September 2010 11:37:48PM 1 point [-]

Good point. I guess the game's labeling system shouldn't deceive you like that, but it would need to have characters who promote non-functioning technology, after some warning that not everyone is reliable, i.e. that these people aren't the tutorial.

Comment author: DSimon 08 September 2010 09:28:02PM *  6 points [-]

Best I think would be if the warning came implicitly as part of the game, and a little ways into it.

For example: The player sees one NPC Alex warn another NPC Joe that failing to drink the Potion of Feather Fall will mean he's at risk of falling off a ledge and dying. Joe accepts the advice and drinks it. Soon after, Joe accidentally falls off a ledge and dies. Alex attempts to rationalize this result away, and (as subtly as possible) shrugs off any attempts by the player to follow conversational paths that would encourage testing the potion.

Player hopefully then goes "Huh. I guess maybe I can't trust what NPCs say about potions" without feeling like the game has shoved the answer at them, or that the NPCs are unrealistically bad at figuring stuff out.

Comment author: SilasBarta 09 September 2010 12:48:14PM 1 point [-]

Exactly -- that's the kind of thing I had in mind: the player has to navigate through rationalizations and be able to throw out unreliable claims despite bold attempts to protect them from being proven wrong.

So is this game idea feasible, and does it meet your criteria?

Comment author: DSimon 09 September 2010 02:55:33PM 3 points [-]

I think so, actually. When I start implementation, I'll probably use an Interactive Fiction engine as another person on this thread suggested, because (a) it makes implementation a lot easier and (b) I've enjoyed a lot of IF but I haven't ever made one of my own. That would imply removing a fair amount of the RPG-ness in your original suggestion, but the basic ideas would still stand. I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England except filled with humorous Rubber Forehead Aliens; maybe the game could be called Standing On The Eyestalks Of Giants.

On the particular criteria:

  • Interesting: I think the setting and the (hopefully generated) buzz would build enough initial interest to carry the player through the first frustrating parts where things don't seem to work as they are used to. Once they get the idea that they're playing as something like an alien Newton, that ought to push up the interest curve again a fair amount.

  • Not (too) allegorical: Everybody loves making fun of alchemists. Now that I think of it, though, maybe I want to make sure the game is still allegorical enough to modern-day issues so that it doesn't encourage hindsight bias.

  • Dramatic/Surprising: IF has some advantages here in that there's an expectation already in place that effects will be described with sentences instead of raw HP numbers and the like. It should be possible to hit the balance where being rational and figuring things out gets the player significant benefits (Dramatic), but the broken theories being used by the alien alchemists and astrologists are convincing enough to fool the player at first into thinking certain issues are non-puzzles (Surprising).

  • Not rigged: Assuming the interface for modelling the game world's physics and doing experiments is sophisticated enough, this should prevent the feeling that the player can win by just finding the button marked "I Am Rational" and hitting it. However, I think this is the trickiest part programming-wise.

I'm going to look into IF programming a bit to figure out how implementable some of this stuff is. I won't and can't make promises regarding timescale or even completability, however: I have several other projects going right now which have to take priority.

Comment author: SilasBarta 09 September 2010 03:40:43PM *  1 point [-]

Thanks, I'm glad I was able to give you the kind of idea you were looking for, and that someone is going to try to implement this idea.

I'm also considering changing the setting to make it an alien world which just happens to be very much like 17th century England

Good -- that's what I was trying to get at. For example, you would want a completely different night sky; you don't want the gamer to be able to spot the Big Dipper (or Southern Cross for our Aussie friends) and then be able to use existing ephemeris (ephemeral?) data. The planet should have a different tilt, or perhaps be the moon of another planet, so the player can't just say, "LOL, I know the heliocentric model, my planet is orbiting the sun, problem solved!"

Different magnetic field too, so they can't just say, "lol, make a compass, it points north".

I'm skeptical, though, about how well text-based IF can accomplish this -- the text-only interface is really constraining, and would have to tell the user all of the salient elements explicitly. I would be glad to help on the project in any way I can, though I'm still learning complex programming myself.

Also, something to motivate the storyline: you need to come up with better cannonballs for the navy (i.e. you have to identify what increases a metal's yield energy), or come up with a way of detecting counterfeit coins.

Comment author: CronoDAS 08 September 2010 10:00:19PM *  4 points [-]

To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.

Or you could just go look up the correct answers on gamefaqs.com.

Comment author: JGWeissman 08 September 2010 10:06:46PM 2 points [-]

So the game should generate different sets of fake names each time it is run, and have some variance in the forms of clues and in which NPCs give them.

Comment author: CronoDAS 08 September 2010 10:08:56PM 4 points [-]

Ever played Nethack? ;)

Comment author: Perplexed 07 September 2010 10:44:57PM 3 points [-]

Dramatic in outcome:

One way to achieve this is to make it a level-based puzzle game. Solve the puzzle suboptimally, and you don't get to move on. Of course, that means that you may need special-purpose programming at each level. On the other hand, you can release levels 1-5 as freeware, levels 6-20 as Product 1.0, and levels 21-30 as Product 2.0.

Not allegorical:

The puzzles I am thinking of are in the field of game theory, so the strategies will include things like not cooperating (because you don't need to in this case), making and following through on threats, and similar "immoral" actions. Some people might object on ethical or political grounds. I don't really know how to answer except to point out that at least it is not a first-person shooter.

Surprising

Game theory includes many surprising lessons - particularly things like the handicap principle, voluntary surrender of power, rational threats, and mechanism design. Coalition games are particularly counter-intuitive, but, with experience, intuitively understandable.

But you can even teach some rationality lessons before getting into games proper. Learn to recognize individuals, for example. Not all cat-creatures you encounter are the same character. You can do several problems involving probabilities and inference before the second player ever shows up.

Comment author: MartinB 01 September 2010 02:37:53AM 12 points [-]

[tl;dr: quest for some specific cryo data references]

I am preparing to do my own, deeper evaluation of cryonics. To that end I have read through many of the case reports on the Alcor and CI pages. Due to my geographic situation I am particularly interested in the feasibility of actually getting a body from Germany over to their respective facilities. The reports are quite interesting and provide lots of insight into the process, but what I am still looking for are the unsuccessful cases: those in which a signed-up member was not brought in due to legal interference, next-of-kin decisions and the like. Is anyone aware of a detailed log of those? I would also like to see how many of the signed-up clients are lost due to the circumstances of their death.

Comment author: Relsqui 16 September 2010 10:37:14PM *  8 points [-]

I'm a translator between people who speak the same language, but don't communicate.

People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.

On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").

Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day compile them into a guide written for an audience much like this one. Do you have questions about how to communicate with people who think very much unlike you, or about specific situations that frustrate you? Would you like me to explain what appears to be an arbitrary point of etiquette? Anything else related to the topic which you'd like to see addressed?

In short: "I understand the weird emotional people who are always yelling at you, but I'm also capable of speaking your language. Ask me anything."


[1] These are both phrased as pejoratively as I could manage, on purpose. Neither extreme is healthy.

Comment author: beriukay 25 September 2010 10:09:37PM 2 points [-]

One issue I've frequently stumbled across is people who make claims that they have never truly considered. When I ask for more information, point out obvious (to me) counterexamples, or ask them to explain why they believe it, they get defensive and in some cases quite offended. Some don't ever want to talk about issues, because they feel like talking about their beliefs with me is like being subjected to some kind of Inquisition. It seems to me that people of this cut believe that to show you care about someone, you should accept anything they say with complete credulity. Have you found good ways to get people to think about what they believe without making them defensive? Do I just have to couch all my responses in fuzzy words? Using weasel words always seemed disingenuous to me, but maybe it's worth it if I can get someone to actually consider the opposition by saying things like "Idunno, I'm just saying it seems to me, and I might be wrong, that maybe gays are people and deserve all the rights that people get, you know what I'm saying?"

Comment author: Relsqui 26 September 2010 03:05:53AM *  10 points [-]

I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.

For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then act surprised when they get defensive.

Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.

The corollary to the implied "your beliefs are wrong" is "I know better than you" (because that's how you would tell that they're wrong). This is an incredibly rude signal to send to--well, anyone, but especially to another adult. Your hackles probably rise too when someone signals that they're superior to you and you don't agree; this is the same thing.

The point, then, is not that you need to accept what people you care about say with credulity. It's that you need to accept it with respect. You do not have any greater value than the person you're talking to (even if you are smarter and more rational), just like they don't have any greater value than you (even if they're richer and more attractive). Even if you really were by some objective measure a better person (which is, as far as I can tell, a useless thing to consider), they don't think so, and acting like it will get you nowhere.

Possibly one of the hardest parts of this to swallow is that, when you're choosing words for the purpose of making another person remain comfortable talking to you, whether their beliefs are a good reflection of reality is not actually important. Obviously they think so, and merely contradicting them won't change that (nor should it). So if you sound like you're just trying to convince them that they're wrong, even if that isn't what you mean to do, they might just feel condescended to and walk away.

None of this means that you can't express your own beliefs vehemently ("gay people deserve equal rights!"). It just means that when someone expresses one of theirs, interrogating them bluntly about their reasons--especially if they haven't questioned them before--is more likely to result in defensiveness than in convincing them or even productive debate. This may run counter to your instincts, understandably, but there it is.

No fuzzy words in the world will soften your language if your inflection reveals intensity and superiority. Display real respect, which includes learning to read your audience and backing off when they're upset. (You can always return to the topic another time, and in fact, occasional light conversations will probably do a better job with this sort of person than one long intense one.) If you aren't able to show genuine respect, well, I don't blame them for refusing to discuss their beliefs with you.

Comment author: matt 01 September 2010 01:27:30AM 8 points [-]

Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010

More information including speakers at http://summit.singinst.org.au.
Register here.

Comment author: Spurlock 01 September 2010 12:33:27PM 20 points [-]

Not sure what the current state of this issue is, apologies if it's somehow moot.

I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.

Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.

So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.

Comment author: Vladimir_Nesov 01 September 2010 03:29:13PM *  4 points [-]

He did give permission to restore the posts (I didn't ask about comments) when I contacted him originally. There remains the issue of someone being technically able to restore these posts.

Comment author: matt 02 September 2010 04:16:28AM 4 points [-]

We have the technical ability, but it's not easy. We wouldn't do it without Roko's and Eliezer's consent, and a lot of support for the idea. (I wouldn't expect Eliezer to consent to restoring the last couple of days of posts/comments, but we could restore everything else.)

Comment author: wedrifid 02 September 2010 04:22:04AM 4 points [-]

It occurs to me that there is a call for someone unaffiliated to maintain a (scraped) backup of everything that is posted in order to prevent such losses in the future.

Comment author: NancyLebovitz 12 September 2010 12:10:40PM 7 points [-]

I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.

Comment author: steven0461 07 September 2010 04:58:42AM *  7 points [-]

In the spirit of "the world is mad" and for practical use, the NYT has an article titled Forget What You Know About Good Study Habits.

Comment author: whpearson 01 September 2010 11:52:32AM 7 points [-]

I'm writing a post on systems to govern resource allocation; is anyone interested in having input into it, or just proofreading it?

This is the intro/summary:

How do we know what we know? This is an important question; however, there is another question which is in some ways more fundamental: why did we choose to devote resources to knowing those things in the first place?

Since we are physical entities, the production of knowledge takes resources that could be used for other things, so the problem expands to how to use resources in general. This I'll call the resource allocation problem (RAP). The problem is widespread, and occurs in the design of organisations as well as computer systems.

The problem is this: we want to allocate resources in a fashion that enables us to achieve our goals. What makes the problem interesting is that making a decision about how to allocate resources itself takes resources. This makes the formalisation of optimal solutions to this problem seemingly impossible.

However, you can formalise potential near-optimality: that is, look at how to design systems that can change the amount of resources allocated to their different activities with a minimum of overhead.

Comment author: Snowyowl 01 September 2010 12:35:46PM 6 points [-]

This sounds interesting and relevant. Here's my input: I read this back in 2008 and I am summarising it from memory, so I may make a few factual errors. But I read that one of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously. A particular program was responsible for allocating system resources so that the systems which were operating ran at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two.

My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
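
(A toy model of that trade-off, in R, with all the numbers invented: say waste accumulates steadily at `leak` watts per second and each allocator pass clears it at a fixed cost. Average waste between passes run every T seconds is leak*T/2, so total overhead is fix_cost/T + leak*T/2, and the minimum sits at an intermediate interval rather than at "run it constantly".)

    leak     <- 0.5    # watts of new inefficiency appearing per second (assumed)
    fix_cost <- 100    # watt-seconds burned by one allocator pass (assumed)
    interval <- c(0.1, 1, 10, 100, 1000)          # seconds between passes
    overhead <- fix_cost / interval + leak * interval / 2
    rbind(interval, overhead)   # both extremes lose; the middle wins
    sqrt(2 * fix_cost / leak)   # the optimum: one pass every ~20 seconds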

Comment author: Oscar_Cunningham 01 September 2010 01:07:42PM *  4 points [-]

I was having similar thoughts the other day while watching a reality TV show where designers competed for a job from Philippe Starck. Some of them spent ages trying to think of a suitable project, and then didn't have enough time to complete it; some of them launched into the first plan they had and it turned out rubbish. Clearly they needed some meta-planning. But how much? Well, they'll need to do some meta-meta planning...

I'd be happy to give your post a read through.

ETA: The buck stops immediately, of course.

Comment author: b1shop 02 September 2010 05:01:47PM *  6 points [-]

I just listened to Robin Hanson's Pale Blue Dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still seems like a good goal.

Edit: For example, someone can be a truth-seeking scientist if they are doing it to answer questions or if they're doing it for the chicks.

Comment author: Morendil 01 September 2010 01:23:13PM 6 points [-]

The journalistic version:

[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest for those who had never been drinkers, second-highest for heavy drinkers and lowest for moderate drinkers.

The abstract from the actual study (on "Late-Life Alcohol Consumption and 20-Year Mortality"):

Controlling only for age and gender, compared to moderate drinkers, abstainers had a more than 2 times increased mortality risk, heavy drinkers had 70% increased risk, and light drinkers had 23% increased risk. A model controlling for former problem drinking status, existing health problems, and key sociodemographic and social-behavioral factors, as well as for age and gender, substantially reduced the mortality effect for abstainers compared to moderate drinkers. However, even after adjusting for all covariates, abstainers and heavy drinkers continued to show increased mortality risks of 51 and 45%, respectively, compared to moderate drinkers. Findings are consistent with an interpretation that the survival effect for moderate drinking compared to abstention among older adults reflects 2 processes. First, the effect of confounding factors associated with alcohol abstention is considerable. However, even after taking account of traditional and nontraditional covariates, moderate alcohol consumption continued to show a beneficial effect in predicting mortality risk.

(Maybe the overlooked confounding factor is "moderation" by itself, and people who have a more relaxed, middle-of-the-road attitude towards life's pleasures tend to live longer?)

Comment author: Vladimir_M 02 September 2010 05:49:40AM *  4 points [-]

The study looks at people over 55 years of age. It is possible that there is some sort of selection effect going on -- maybe decades of heavy drinking will weed out all but the most alcohol-resistant individuals, so that those who are still drinking heavily at 55-60 without ever having been harmed by it are mostly immune to the doses they're taking. From what I see, the study controls for past "problem drinking" (which they don't define precisely), but not for people who drank heavily without developing a drinking problem, but couldn't handle it any more after some point and decided themselves to cut back.

Also, it should be noted that papers of this sort use pretty conservative definitions of "heavy drinking." In this paper, it's defined as more than 42 grams of alcohol per day, which amounts to about a liter of beer or three small glasses of wine. While this level of drinking would surely be risky for people who are exceptionally alcohol-intolerant or prone to alcoholism, lots of people can handle it without any problems at all. It would be interesting to see a similar study that would make a finer distinction between different levels of "heavy" drinking.
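
(Sanity-checking that conversion in R, assuming a typical 5% ABV beer, small 120 ml glasses of 12% wine, and ethanol's density of about 0.789 g/ml:)

    0.05 * 1000 * 0.789      # grams of ethanol in a litre of 5% beer: ~39 g
    3 * 120 * 0.12 * 0.789   # three small glasses of 12% wine: ~34 g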

Comment author: cousin_it 02 September 2010 09:16:16PM *  3 points [-]

These are fine conclusions to live by, as long as moderate drinking doesn't lead you to heavy drinking, cirrhosis and the grave. Come visit Russia to take a look.

Comment author: billswift 01 September 2010 10:27:38AM *  6 points [-]

In "The Shallows", Nicholas Carr makes a very good argument that replacing deep reading books, with the necessarily shallower reading online or of hypertext in general, causes changes in our brains which makes deep thinking harder and less effective.

Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of the self-direction/self-discipline behaviors they replace.

Another problem with the Web, one that wasn't discussed in "The Shallows", is that hypertext channels you to the connections the author chooses to present. Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections. It is in the creation of your own links within your own mind that information is turned into knowledge.

Carr actually has two other general theses in the book: that neural plasticity to some degree undercuts the more extreme claims of evolutionary psych, about which I have some doubts and am doing further reading; and a pretty silly closing argument about the implausibility of AI. Fortunately, his main argument about the problems with using hypertext is totally independent of these two.

Comment author: PhilGoetz 01 September 2010 04:34:46PM 7 points [-]

I haven't read Nicholas Carr, but I've seen summaries of some of the studies used to claim that book reading results in more comprehension than hypertext reading. All the ones I saw are bogus. They all use, for the hypertext reading, a linear extract from a book, broken up into sections separated by links. Sometimes the links are placed in somewhat arbitrary places. Of course a linear text can be read more easily linearly.

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

A more fair test would be to give students a topic to study, with the same material, but some given books, and some given the book material organized and indexed in a competent way as hypertext.

Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Comment author: allenwang 01 September 2010 09:04:58PM *  7 points [-]

It seems to me that the main reason most hypertext sources seem to produce shallower reading is not the hypertext itself, but that the barriers to publication are so low that the quality of most written work online is much lower than that of printed material. For example, this post is something that I might have spent 3 minutes thinking about before posting, whereas a printed publication would have much more time to mature, and also many more filters, such as publishers, to take out the noise.

It is more likely that book reading seems deeper because the quality is better.

Also, it wouldn't be difficult to test this hypothesis with print and online newspapers, since they both contain the same material.

Comment author: Kaj_Sotala 02 September 2010 09:28:09PM *  10 points [-]

It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.

Comment author: xamdam 01 September 2010 08:55:10PM 3 points [-]

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

It has deeper structure, but that is not necessarily user-friendly. A great textbook will have different levels of explanation, an author-designed depth-diving experience. Depending on the author, the material, you, and the quality of the local Wikipedia, that might be a better or worse learning experience.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Yep, definitely a benefit, but not without a trade-off. Often a good author will set you up with connections better than you can.

Comment author: jacob_cannell 01 September 2010 09:15:54PM 2 points [-]

I like allenwang's reply below, but there is another consideration with books.

Long before hyperlinks, books evolved comprehensive indices and references, and these allow humans to relatively easily and quickly jump between topics in one book and across books.

Now are the jumps we employ on the web faster? Certainly. But the difference is only quantitative, not qualitative, and the web version isn't enormously faster.

Comment author: JohnDavidBustard 01 September 2010 03:43:05PM 2 points [-]

It is very difficult to distinguish rationalisations of the discomfort of change from actual consequences. If this belief that hypertext leads to a less sophisticated understanding than reading a book is true, what behaviour would change that could be measured?

Comment author: Kaj_Sotala 01 September 2010 04:46:23PM 15 points [-]

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

Comment author: Snowyowl 02 September 2010 01:24:54PM *  6 points [-]

Wow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
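
(If you want to watch it fall out of pure noise, here's a three-line R simulation, sample size arbitrary: two independent, equally noisy measurements of the same thing, and the "change" correlates strongly and negatively with the first score even though nothing is really changing.)

    set.seed(1)
    x <- rnorm(10000)   # first measurement: pure noise
    y <- rnorm(10000)   # second measurement: independent noise
    cor(x, y - x)       # about -0.71 (-1/sqrt(2) in expectation): high first
                        # scores "regress" purely by construction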

Comment author: Vladimir_M 02 September 2010 04:00:14AM *  2 points [-]

When I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper:

Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980

Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put up for open access online -- it would definitely be worth it -- but I was unable to find any contact information; it seems he retired long ago.

There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally:

Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973

Comment author: Liron 01 September 2010 05:50:16AM 14 points [-]

I made this site last month: areyou1in1000000.com

Comment author: gwern 08 September 2010 01:07:52PM *  5 points [-]

Relevant to our akrasia articles:

If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.

http://www.marginalrevolution.com/marginalrevolution/2010/09/should-you-bet-on-your-own-ability-to-lose-weight.html

Comment author: realitygrill 03 September 2010 04:18:06AM 5 points [-]

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? When asked what qualities she would bring to the competition, she said she'd "bring without any doubt my rationality", among other things.

Comment author: Morendil 02 September 2010 10:21:35PM 5 points [-]

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial features of which it’s a byproduct.

Neil Van Leeuwen, Why Self-Deception Research Hasn't Made Much Progress

Comment author: beriukay 25 September 2010 09:50:05PM 4 points [-]

I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me on their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.

Comment author: beriukay 11 September 2010 04:23:50AM 4 points [-]

I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.

We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin was fair, or at least assign some likelihood distribution.

But not my frequentist archnemesis! He let it be known that he would level half the continent if the probability of getting Heads wasn't determined by his Expectation divided by the number of events. The number of events. Of an imaginary coin toss. Determine that toss' probability.
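
To make the measure-theoretic point concrete, here is the situation in our own notation (a sketch, not the exact theorem from the class). The probability axioms pin down only the trivial events; the probability of Heads enters as a free parameter:

```latex
\Omega = \{H, T\}, \qquad
\mathcal{F} = \{\, \emptyset,\ \{H\},\ \{T\},\ \Omega \,\}

% Forced by the axioms of a probability measure:
P(\emptyset) = 0, \qquad P(\Omega) = 1

% Not forced: any value of p yields a valid measure.
P(\{H\}) = p, \qquad P(\{T\}) = 1 - p, \qquad p \in [0, 1]
```

Whether p gets fixed by a symmetry argument, a prior, or an imagined long-run frequency is exactly where the Bayesian/frequentist disagreement enters; the theorem itself is silent.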

It occurs to me that there was a lot of set up for very little punch line in that anecdote. If you are unamused, you are in good company. I ordered R to calculate an integral for me today, and it politely replied: "Error in is.function(FUN) : 'FUN' is missing"

Comment author: eugman 10 September 2010 12:51:28PM 4 points [-]

Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like The 5 Love Languages and material on attachment styles have been useful.

Thanks.

Comment author: Relsqui 16 September 2010 09:31:45PM 4 points [-]

I have two suggestions, which are not so much about romantic relationships as they are about communicating clearly; given your example and the comments below, though, I think they're the kind of thing you're looking for.

The Usual Error is a free ebook (or nonfree dead-tree book) about common communication errors and how to avoid them. (The "usual error" of the title is assuming by default that other people are wired like you--basically the same as the typical psyche fallacy.) It has a blog as well, although it doesn't seem to be updated much; my recommendation is for the book.

If you're a fan of the direct practical style of something like LW, steel yourself for a bit of touchy-feeliness in UE, but I've found the actual advice very useful. In particular, the page about the biochemistry of anger has been really helpful for me in recognizing when and why my emotional response is out of whack with the reality of the situation, and not just that I should back off and cool down, but why it helps to do so. I can give you an example of how this has been useful for me if you like, but I expect you can imagine.

A related book I'm a big fan of is Nonviolent Communication (no link because its website isn't of any particular use; you can find it at your favorite book purveyor or library). Again, the style is a bit cloying, but the advice is sound. What this book does is lay out an algorithm for talking about how you feel and what you need in a situation of conflict with another person (where "conflict" ranges from "you hurt my feelings" to gang war).

I think it's noteworthy that following the NVC algorithm is difficult. It requires finding specific words to describe emotions, phrasing them in a very particular way, connecting them to a real need, and making a specific, positive, productive request for something to change. For people who are accustomed to expressing an idea by using the first words which occur to them to do so (almost everyone), this requires flexing mental muscles which don't see much use. I think of myself as a good communicator, and it's still hard for me to follow NVC when I'm upset. But the difficulty is part of the point--by forcing you to stop and rethink how you talk about the conflict, it forces you to see it in a way that's less hindered by emotional reflex and more productive towards understanding what's going on and finding a solution.

Neither of these suggestions requires that your partner also read them, but it would probably help. (It just keeps you from having to explain a method you're using.)

If you find a good resource for this which is a blog, I'd be interested in it as well. Maybe obviously, this topic is something I think a lot about.

Comment author: JamesAndrix 05 September 2010 08:22:54PM 4 points [-]

Finally prompted by this, but it would be too off-topic there:

http://lesswrong.com/lw/2ot/somethings_wrong/

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.

There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.

Comment author: Perplexed 05 September 2010 11:02:21PM 1 point [-]

The ideas really started forming around the recent 'public relations' discussions.

If we want to change people's minds, we should be advertising.

My guess is that "Harry Potter and the Methods of Rationality" is the best piece of publicity the SIAI has ever produced.

I think that the only way to top it would be a Singularity/FAI-themed computer game.

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Comment author: ata 05 September 2010 11:46:07PM *  8 points [-]

How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?

I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)

Unfortunately, I don't think we can get people excited about bringing about a Friendly singularity by speaking honestly about how it happens purely at the object level, because what actually needs to be done is tons of math (plus some outreach and maybe paper-writing and book-writing and eventually a lot of coding). Saving the world isn't actually going to be an exciting ultimate showdown of ultimate destiny, and any marketing and publicity shouldn't be setting people up for disappointment by portraying it as such... and it should also be making it clear that even if existential risk reduction were fun and exciting, it wouldn't be something you do for yourself because it's fun and exciting, and you don't do it because you get to affiliate with smart/high-status people and/or become known as one yourself, and you don't do it because you personally want to live forever and don't care about the rest of the world, you do it because it's the right thing to do no matter how little you personally get out of it.

So we don't want to push the public further toward thinking of the singularity as a geek / sci-fi / power-fantasy / narcissistic thing (I realize some of those are automatic associations and pattern completions that people independently generate, but that's to be resisted and refuted rather than embraced). Fiction that portrays rationality as virtuous (and transparent, as in the Rationalist Fanfiction Principle) and that portrays transhumanistic protagonists that people can identify with (or at least like) is good because it makes the right methods and values salient and sympathetic and exciting. Giving people a vision of a future where humanity has gotten its shit together as a thing-to-protect is good; anything that makes AI or the Singularity or even FAI seem too much like an end in itself will probably be detrimental, especially if it is portrayed anywhere near anthropomorphically enough for it to be a protagonist or antagonist in a video game.

Maybe it would work, and maybe not, but I think that the demographic we want to reach is 4chan - teenage hackers. We need to tap into the "Dark Side" of the Cyberculture.

Only if they can be lured to the Light Side. The *chans seem rather tribal and amoral (at least the /b/s and the surrounding culture; I know that's not the entirety of the *chans, but they have the strongest influence in those circles). If the right marketing can turn them from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

Comment author: Perplexed 06 September 2010 12:26:51AM 2 points [-]

I don't think that would be very helpful. [And here is why...]

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

If the right marketing can turn them [the *chans] from apathetic tribalist sociopaths into altruistic globalist transhumanists, then that's great, but I wouldn't focus limited resources in that direction. Probably better to reach out to academia; at least that culture is merely inefficient rather than actively evil.

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

Comment author: ata 06 September 2010 01:02:05AM *  5 points [-]

I am impressed. A serious and thoughtful reply to a maybe serious, but definitely not thoughtful, suggestion. Thank you.

Thank you for taking it well; sometimes I still get nervous about criticizing. :)

"Actively evil" is not "inherently evil". The action currently is over on the evil side because the establishment is boring. Anti-establishment evil is currently more fun. But what happens if the establishment becomes evil and boring? Could happen on the way to a friendly singularity. Don't rule any strategies out. Thwarting a nascent uFAI may be one of the steps we need to take along the path to FAI.

I've heard the /b/ / "Anonymous" culture described as Chaotic Neutral, which seems apt. My main concern is that waiting for the right thing to become fun for them to rebel against is not efficient. (Example: Anonymous's movement against Scientology began not in any of the preceding years when Scientology was just as harmful as always, but only once they got an embarrassing video of Tom Cruise taken down from YouTube. "Project Chanology" began not as anything altruistic, but as a morally-neutral rebellion against what was perceived as anti-lulz. It did eventually grow into a larger movement including people who had never heard of "Anonymous" before, people who actually were in it to make the world a better place whether the process was funny or not. These people were often dismissed as "moralfags" by the 4chan old-timers.) Indeed they are not inherently evil, but when morality is not a strong consideration one way or the other, it's too easy for evil to be more fun than good. I would not rely on them (or even expect them) to accomplish any long-term good when that's not what they're optimizing for.

(And there's the usual "herding cats" problem — even if something would normally seem fun to them, they're not going to be interested if they get the sense that someone is trying to use them.)

Maybe some useful goal that appeals to their sensibilities will eventually present itself, but for now, if we're thinking about where to direct limited resources and time and attention, putting forth the 4chan crowd as a good target demographic seems like a privileged hypothesis. "Teenage hackers" are great (I was one!), but I'm not sure about reaching out to them once they're already involved in 4chan-type cultures. There are probably better times and places to get smart young people interested.

Comment author: Daniel_Burfoot 01 September 2010 02:48:18AM 4 points [-]

Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?

Comment author: kim0 01 September 2010 09:08:23AM 3 points [-]

I am, and I am planning to leave it to get higher, more average pay. From my viewpoint, it is terribly overrated and undervalued.

Comment author: Daniel_Burfoot 01 September 2010 04:18:01PM 4 points [-]

Can you expand on this? Do you think your experience is typical?

Comment author: kim0 03 September 2010 08:19:18AM 4 points [-]

Most places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having a M.Sc. in physics and lots of programming experience made it possible for me to have more different kinds of engineering jobs, and thus more varied experience.

My conclusion is that the anthropic principle holds for me in the workplace, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical.

My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.

Comment author: xamdam 01 September 2010 03:34:23AM *  3 points [-]

Ping Arthur Breitman on FB or LinkedIn. He is part of the NYC LW meetup, and a quant at Goldman.

Comment author: Will_Newsome 03 September 2010 11:02:19AM *  10 points [-]

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

Comment author: Airedale 03 September 2010 03:08:20PM *  8 points [-]

The words righteous indignation in combination are sufficiently well-recognized as to have their own Wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance.

Edited to add this possibly relevant/interesting link I came across, where David Brin describes self-righteous indignation as addictive.

Comment author: Perplexed 03 September 2010 04:20:22PM 4 points [-]

which seems like a reason not to use it in your sense.

Strikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive in both rationalists and less rational folks for pretty much the same reasons.

Comment author: jimrandomh 03 September 2010 07:08:17PM 6 points [-]

I noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.

Comment author: [deleted] 09 September 2010 02:52:36AM 4 points [-]

Righteous indignation is a good word for it.

I, personally, see it as one of the emotional capacities of a healthy person. Kind of like lust. It can be misused, it can be a big time-waster if you let it occupy your whole life, but it's basically a sign that you have enough energy. If it goes away altogether, something may be wrong.

I had a period a few years ago of something like anhedonia. The thing is, I also couldn't experience righteous indignation, or nervous worry, or ordinary irritability. It was incredibly satisfying to get them back. I'm not a psychologist at all, but I think of joy, anger, and worry (and lust) as emotions that require energy. The miserably lethargic can't manage them.

So that's my interpretation and very modest defense of righteous indignation. It's not a very practical emotion, but it is a way of engaging personally with the world. It motivates you in the minimal way of making you awake, alert, and focused on something. The absence of such engagement is pretty horrible.

Comment author: komponisto 04 September 2010 12:54:13AM 4 points [-]

Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist.

It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer.

It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.

Comment author: wedrifid 03 September 2010 12:09:08PM *  4 points [-]

Should I just use 'indignation' and then define what I mean in the first few sentences?

That could work well when backed up with a description of just what you will be using the term to mean.

I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.

Comment author: brian_jaress 05 September 2010 10:11:18AM 3 points [-]

I've seen "moral indignation," which might fit (though I think "indignation" still implies anger). I've also heard people who feel that way describe the object of their feelings as "disgusting" or "offensive," so you could call it "disgust" or "being offended." Of course, those people also seemed angry. Maybe the non-angry version would be called "bitterness."

As soon as I wrote the paragraph above, I felt sure that I'd heard "moral disgust" before. I googled it and the second link was this. I don't know about the quality of the study, but you could use the term.

Comment author: David_Allen 04 September 2010 06:02:09AM *  3 points [-]

In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others.

I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature.

I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.

Comment author: Eliezer_Yudkowsky 03 September 2010 07:07:18PM 5 points [-]

Sounds related to the failure class I call "living in the should-universe".

Comment author: Will_Newsome 03 September 2010 10:51:03PM *  3 points [-]

It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter.

Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.

Comment author: Pavitra 23 September 2010 05:30:32AM 3 points [-]

In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply nonconclusive evidence consistent with a hard takeoff actively in progress).

Comment author: jimrandomh 23 September 2010 08:23:40PM *  3 points [-]

It looks a little odd for a hard takeoff scenario - it seems to be prevalent only in Iran, it seems configured to target a specific control system, and it uses 0-days wastefully (I see a claim that it uses four 0-days and 2 stolen certificates). On the other hand, this is not inconsistent with an AI going after a semiconductor manufacturer and throwing in some Iranian targets as a distraction.

My preference ordering is friendly AI, humans, unfriendly AI; my probability ordering is humans, unfriendly AI, friendly AI.

Comment author: simplicio 12 September 2010 05:10:27AM 3 points [-]

In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?

This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!

Comment author: khafra 07 September 2010 01:31:41PM 3 points [-]

The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.

Comment author: Clippy 07 September 2010 02:19:16PM 2 points [-]

That's a really good article, the Microsoft humans really know their stuff.

Comment author: blogospheroid 02 September 2010 12:32:25PM 3 points [-]

Idea - Existential risk fighting corporates

People of normal IQ are advised to work at our normal day jobs, using the best competencies that we have, and after setting aside enough money for ourselves, to contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value, and if there is such a diversity of skills that they cannot make a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might just be let loose.

But just consider the small probability that some of the rationalists come together as a non-profit corporation to contribute to mitigating existential risk. There are many reasons our kind cannot cooperate. Also, the fact is that coordination is hard.

But if we could, then with the latest in decision theory, argument diagrams (1, 2, 3), internal futarchy (after the size of the corporation gets big), we could create a corporation that wins. There are many people from the world of software here. Within the corporation itself, there is no need to stick to legacy systems. We could interact with the best of coordination software and keep the corporation "sane".

We can create products and services like any for-profit corporation and sell them at market rates, but use the surplus to mitigate existential risk. In other words, it is difficult, but in the Everett branches where x-rationalists manage a synergistic outcome, it might be possible to strengthen the funding of existential risk mitigation considerably.

Some criticisms of this idea which I could think of

  • The corporation becomes a lost cause. Goodhart's law kicks in and the original purpose of forming the corporation is lost.
  • People are polite when in a situation where no important decisions are being made (like an internet forum like lesswrong), but if actual productivity is involved, they might get hostile when someone lowers their corporate karma. Perfect internet buddies might become co-workers who hate each other's guts.
  • The argument that there is no possibility of synergy. The present situation, where rational people spread over the world and in different situations are money pumping from less rational people around them is better.
  • People outside the corporation might mentally slot existential risk as a kooky topic that "that creepy company talks about all the time" and not see it as a genuine issue that diverse persons from different walks of life are interested in.

and so on..

But still, my question is - Shouldn't we at least consider the possibilities of synergy in the manner indicated?

Comment author: wedrifid 02 September 2010 01:45:11PM *  1 point [-]

This would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

Comment author: JamesAndrix 02 September 2010 01:49:42AM 3 points [-]

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? or even a open thread comment?

Comment author: JohnDavidBustard 02 September 2010 08:10:52AM *  10 points [-]

I used to be a professional games programmer and designer, and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

Comment author: Mass_Driver 02 September 2010 05:13:26PM 3 points [-]

I found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?

Comment author: JohnDavidBustard 02 September 2010 05:41:35PM *  5 points [-]

Yes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre specific. I like Rules of Play not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped their interactions down to their most basic motivating mechanics (Farmville or Diablo, for example).

To provide a concrete example, I worked on a game (MediEvil: Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost. Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and understood in isolation.
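
As a purely illustrative sketch of the kind of mapping being tuned (the structure, parameter names, and values here are ours, not the actual game's code):

```python
class BowAim:
    """Toy joystick-to-bow mapping with the two knobs described above."""

    def __init__(self, gain=2.0, smoothing=0.15):
        self.gain = gain            # stick deflection -> target velocity ("speed")
        self.smoothing = smoothing  # per-frame catch-up rate ("acceleration")
        self.velocity = 0.0
        self.angle = 0.0

    def update(self, stick, dt=1.0 / 60):
        # Low-pass filter toward the target velocity; lowering `smoothing`
        # adds lag, which players tend to read as the bow having more "weight".
        target = self.gain * stick
        self.velocity += self.smoothing * (target - self.velocity)
        self.angle += self.velocity * dt
        return self.angle

aim = BowAim()
for stick in (1.0, 1.0, 1.0, 0.0, 0.0):
    print(round(aim.update(stick), 4))
```

The "sweet spot" in the anecdote would correspond to a narrow region in this two-parameter space where the filtered response matches players' proprioceptive expectations.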

Comment author: komponisto 02 September 2010 02:15:57AM *  5 points [-]

I've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

Comment author: homunq 01 September 2010 03:52:49PM *  17 points [-]

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Comment author: PhilGoetz 01 September 2010 04:26:39PM 8 points [-]

If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion are very rare here.

I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.

Comment author: Emile 01 September 2010 04:40:01PM 5 points [-]

If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe.

I'm pretty sure that "riddle theory" is a reference to Roko's post, not a new banned topic.

Comment author: homunq 01 September 2010 04:32:40PM *  4 points [-]

My registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia's "contact user" function (which connects to the same email.)

Suggestions like your email idea would be the main purpose of having the discussion (here or on a top-level post). I don't think that some short-lived chatter would change a strongly-held belief, and I have no desire nor capability of unseating the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted post authors or at least their ability to view the responses to their own deleted post, would be helpful.

Comment author: Airedale 01 September 2010 04:35:21PM 4 points [-]

I think such discussion wouldn't necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven't gotten around to it.

Comment author: Perplexed 01 September 2010 06:47:49PM *  14 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words

Comment author: wnoise 01 September 2010 07:50:16PM *  5 points [-]

Self-censorship to protect our own mental health? Stupid.

My gloss on it is that this is at best a minor part, though it figures in.

The topic is an idea that has horrific implications that are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation.

More explaining why many won't think it dangerous at all. This doesn't directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer.

I personally don't buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I'm willing to self-censor to some degree, even though I hate the heavy-handed response.

Comment author: cata 01 September 2010 08:00:09PM 7 points [-]

Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don't really live my life in a way that's consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats.

I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.

Comment author: Kaj_Sotala 02 September 2010 08:57:51PM 6 points [-]

I read the idea, but it seemed to have basically the same flaw as Pascal's wager does. On that ground alone it seemed like it shouldn't be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn't save the post.)

Comment author: timtyler 08 September 2010 06:57:28AM 4 points [-]

My analysis was that it described a real danger. Not a topic worth banning, of course - but not as worthless a danger as the one that arises in Pascal's wager.

Comment author: Perplexed 01 September 2010 08:47:58PM 9 points [-]

How about an informed consent form:

  • (1) I know that the SIAI mission is vitally important.
  • (2) If we blow it, the universe could be paved with paper clips.
  • (3) Or worse.
  • (4) I hereby certify that points 1 & 2 do not give me nightmares.
  • (5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
Comment author: wedrifid 02 September 2010 03:10:12AM 1 point [-]

I like it!

Although 5 could be easily replaced by "Go earn a lot of money in a startup, never think about FAI again but still donate money to SIAI because you remember that you have some good reason to that you don't want to think about explicitly."

Comment author: homunq 01 September 2010 07:29:12PM 2 points [-]

I think it's safe to tell you that your second two hypotheses are definitely not on the right track.

Comment author: Konkvistador 16 September 2010 10:39:14PM *  3 points [-]

A minute in Konkvistador's mind:

Again the very evil mind shattering secret, why do I keep running into you?

This is getting old, lots of people seem to know about it. And a few even know the evil soul wrecking idea.

The truth is out there, my monkey brain can't cope with the others having a secret they're not willing to share, they may bash my skull in with a stone! I should just mass PM the guys who know about the secret in an inconspicuous way. They will drop hints, they are weak. Also, traces of the relevant texts have to still be online.

That job advert seems to be the kind a rather small subset of organizations would put out.

That is just paranoid don't even think about that.

XXX asf ag agdlqog hh hpoq fha r wr rqw oipa wtrwz wrz wrhz. W211!!

Yay posting on Lesswrong feels like playing Call of Cthulhu!

....

These are supposed to be not only very smart, but very rational people, people you have a high opinion of, who seem to take the idea very seriously. They may be trying to manipulate you. There may be a non-trivial possibility of them being right.

....

I suddenly feel much less enthusiastic about life extension and cryonics.

Comment author: thomblake 16 September 2010 10:53:45PM 2 points [-]

I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed.

I don't think there's anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.

Comment author: Konkvistador 16 September 2010 11:06:05PM *  3 points [-]

Can I haz evil soul crushing idea plz?

But to be serious: yes, if I find the idea foolish, then people taking it seriously reduces my optimism as well, just as much as malice on the part of the Less Wrong staff or just plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don't take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would be much less fun knowing the evil soul-crushing self-fulfilling prophecy (tm); the situation around it is hilarious.

What really catches my attention, however, is the thought experiment of how exactly one is supposed to quarantine a very, very dangerous idea; in the space of all possible ideas, I'm quite sure there are a few that could prove very toxic to humans.

The LW members who take it seriously are doing a horrible job of it.

Comment author: NancyLebovitz 20 September 2010 04:24:12PM 1 point [-]

Upvoted for the cat picture.

Comment author: xamdam 05 September 2010 07:21:48PM 4 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

Yes. I think that lack of policy 1) reflects poorly on the objectivity of moderators, even if in appearance only, and 2) diverts too much energy into nonproductive discussions.

Comment author: Relsqui 16 September 2010 10:08:42PM 3 points [-]

reflects poorly on the objectivity of moderators

As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we've compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn't yet on our list--or which doesn't quite match the way we worded it--or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators).

This is not to say that I wouldn't like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.

Comment author: Emile 01 September 2010 04:37:54PM 2 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well.

See this on Meatball Wiki, that has quite a few pages on organization of Online Communities.

Comment author: homunq 01 September 2010 05:58:07PM 7 points [-]

I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.

I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.

Comment author: Emile 02 September 2010 08:21:43AM 2 points [-]

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did.

It's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.

Too much meta discussion is bad for a community.

Comment author: JGWeissman 01 September 2010 06:13:37PM *  0 points [-]

the truth is "non-danger"

Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.

Comment author: homunq 01 September 2010 06:49:44PM *  4 points [-]

Look, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.

Comment author: billswift 01 September 2010 10:16:22AM *  6 points [-]

The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.

Comment author: PeerInfinity 20 September 2010 08:19:31PM *  2 points [-]

Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?

Comment author: gwern 12 September 2010 01:50:03PM 2 points [-]

The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/

Comment author: Cyan 11 September 2010 06:34:33PM *  2 points [-]

Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.

It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to keep that in mind too, and I want to make the universe a less deadly place for everyone.

(If you feel like voting this comment up, please review this first.)

Comment author: datadataeverywhere 10 September 2010 08:12:03PM *  2 points [-]

An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.

At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?

For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guaranteed to be a very uninformed guess, but some guess is possible, right?

The observer can establish a CDF of the probability of the light turning off at time t; for t <= tx, p=0. For t > tx, 0 < p < 1, assuming that the observer can never be certain that the light will ever turn off. What goes on in between is the interesting part, and I haven't the faintest idea how to justify any particular shape for the CDF.
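
One way to cash this out numerically, assuming (purely for illustration; nothing in the problem forces this choice) a power-law prior over the total duration:

```python
import numpy as np

def posterior_duration(elapsed, alpha=3.0, x_max_factor=1e4, n=200_000):
    """Posterior over the total on-duration X, given only that X > elapsed.

    The prior p(X) ~ X**-alpha is an assumption made for illustration.
    The observation "still on at tx" just truncates and renormalizes it.
    """
    x = np.linspace(elapsed, elapsed * x_max_factor, n)
    prior = x ** -alpha
    post = prior / np.trapz(prior, x)
    mean = np.trapz(x * post, x)              # E(X | X > elapsed)
    cdf = np.cumsum(post) * (x[1] - x[0])
    median = x[np.searchsorted(cdf, 0.5)]
    return mean, median

print(posterior_duration(elapsed=10.0))
# With alpha=3 this gives mean ~ 2*elapsed and median ~ 1.41*elapsed:
# roughly, expect the light to stay on about as long again as it already has.
```

Different alphas encode different answers to the shape question above, but the qualitative behaviour -- the only scale available is the elapsed time itself, so the estimate stretches with it -- is the robust part.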

Comment author: cousin_it 09 September 2010 10:25:53PM *  2 points [-]

The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.

Comment author: Document 06 September 2010 08:23:32PM 2 points [-]

Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.

Comment author: knb 06 September 2010 06:16:03PM *  2 points [-]

Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble savage bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty as this led to better in-group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?

Comment author: teageegeepea 06 September 2010 01:24:54AM 2 points [-]

Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.

Comment author: Will_Sawin 06 September 2010 02:53:29AM 2 points [-]

It's very silly. What he's saying is that there are properties at high levels of organization that don't exist at low levels of organization.

As Eliezer says, emergence is trivial. Everything that isn't quarks is emergent.

His "universality" argument seems to be that different parts can make the same whole. Well of course they can.

He certainly doesn't make any coherent arguments. Maybe he does in his book?

Comment author: Perplexed 06 September 2010 03:10:35AM 3 points [-]

Yet another example of a Nobel prize winner in disagreement with Eliezer within his own discipline.

What is wrong with these guys?

Why if they would just read the sequences, they would learn the correct way for words like "reduction" and "emergence" to be used in physics.

Comment author: khafra 07 September 2010 02:44:30AM 2 points [-]

To be fair, "reductionism is experimentally wrong" is a statement that would raise some argument among Nobel laureates as well.

Comment author: Perplexed 07 September 2010 03:16:02AM 2 points [-]

Argument from some Nobelists. But agreement from others. Google on the string "Philip Anderson reductionism emergence" to get some understanding of what the argument is about.

My feeling is that everyone in this debate is correct, including Eliezer, except for one thing - you have to realize that different people use the words "reductionism" and "emergence" differently. And the way Eliezer defines them is definitely different from the way the words are used (by Anderson, for example) in condensed matter physics.

Comment author: JohnDavidBustard 02 September 2010 01:15:08PM 2 points [-]

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If this assumption must be made, how can it be reasonably determined, and crucially, what events could occur that would lead to it being changed?

A real example would be that the PDF is often modelled as a Gaussian distribution, but more recent approaches tend to use different distributions because of outliers. This seems like the right thing to do because our visual sense of distribution can easily identify such points, but is there any more rigorous justification?

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Comment author: Perplexed 02 September 2010 01:57:10PM 4 points [-]

Is there a reasonable way of applying probability to analogue inference problems?

Your examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes.

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge". In particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties, ... to put things mildly. But then classical statistics has its subtleties too.
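
To make the model-selection point concrete, here is a toy sketch of the setup from the question. The histories, the equal priors on A and B, and the choice between a Gaussian and a heavy-tailed Student-t likelihood are all assumptions introduced for illustration:

```python
import numpy as np
from scipy import stats

hist_a = np.array([1.0, 1.1, 0.9, 1.05, 0.95])  # made-up history for substance A
hist_b = np.array([2.0, 2.2, 1.9, 2.1, 5.0])    # made-up history for B, one outlier
c = 1.6                                          # new analogue reading to classify

def posterior_a(c, likelihood):
    pa = likelihood(c, hist_a)
    pb = likelihood(c, hist_b)
    return pa / (pa + pb)  # equal prior probability on A and B assumed

# Model 1: Gaussian fitted by moments (the outlier inflates B's variance).
gauss = lambda c, h: stats.norm.pdf(c, loc=h.mean(), scale=h.std(ddof=1))

# Model 2: heavy-tailed Student-t with robust location/scale estimates.
tdist = lambda c, h: stats.t.pdf(c, df=3, loc=np.median(h),
                                 scale=stats.iqr(h) / 1.35)

print(posterior_a(c, gauss), posterior_a(c, tdist))
```

The two likelihood families can assign very different posteriors to the same reading, which is the sense in which the choice of underlying model, rather than the inference rule, carries most of the weight.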

Comment author: SilasBarta 01 September 2010 10:44:31PM *  2 points [-]

Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)

This time, you get to see Landsburg:

  • attempt to prove the existence of the natural numbers while explicitly dismissing the relevance of what sense he's using "existence" to mean!
  • use formal definitions to make claims about the informal meanings of the terms!
  • claim that Peano arithmetic exists "because you can see the marks on paper" (guess it's not a platonic object anymore...)!

(Sorry, XiXiDu, I'll reply to you on his blog if my posting privileges stay up long enough ... for now, I would agree with what you said, but am not making that point in the discussion.)

Comment author: DanielVarga 02 September 2010 12:30:31AM 3 points [-]

Wow, a debate where the most reasonable-sounding person is a sysop of Conservapedia. :)

Comment author: datadataeverywhere 09 September 2010 03:14:37PM 2 points [-]

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.

However, if we are predominately white males, why are we? Should that concern us? There's nothing about being white, or female, or Hispanic, or deaf, or gay that prevents one from being a rationalist. I'm willing to bet that after correcting for socioeconomic correlations with ethnicity, we still don't make par. Perhaps naïvely, I feel like we must explain ourselves if this is the case.

Comment author: gwern 09 September 2010 04:05:23PM *  9 points [-]

This sounds like the same question as why there are so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...

In other words, here be dragons.

* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.

Comment author: datadataeverywhere 09 September 2010 04:26:32PM 3 points [-]

That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!

It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.

The Tale of Genji has gone on my list of books to read. Thanks!

Comment author: gwern 09 September 2010 04:58:40PM *  6 points [-]

At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better.

Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.

The Tale of Genji has gone on my list of books to read. Thanks!

I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general immunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding ministerial ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than used to.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.

The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One remarkable freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions that guide many characters' activities (trapping them in their houses on inauspicious days) are practically unobjectionable.

Comment author: timtyler 09 September 2010 08:12:51PM *  8 points [-]

How diverse is Less Wrong?

You may want to check the survey results.

Comment author: datadataeverywhere 09 September 2010 09:19:55PM 2 points [-]

Thank you very much. I looked for but failed to find this when I went to write my post. I had intended to start with actual numbers, assuming that someone had previously asked the question. The rest is interesting as well.

Comment author: cousin_it 09 September 2010 04:20:20PM *  6 points [-]

However, if we are predominately white males, why are we?

Ignoring the obviously political issue of "concern", it's fun to consider this question on a purely intellectual level. If you're a white male, why are you? Is the anthropic answer ("just because") sufficient? At what size of group does it cease to be sufficient? I don't know the actual answer. Some people think that asking "why am I me" is inherently meaningless, but for me personally, this doesn't dissolve the mystery.

Comment author: datadataeverywhere 09 September 2010 04:30:23PM 4 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

I asked not from a political perspective. In arguments about diversity, political correctness often dominates. I am actually interested in, among other things, whether a lack of diversity is a functional impairment for a group. I feel strongly that it is, but I can't back up that claim with evidence strong enough to match my belief. For a group such as Less Wrong, I have to ask what we miss due to a lack of diversity.

Comment author: cousin_it 09 September 2010 04:45:48PM *  5 points [-]

The flippant answer is that a group size of 1 lacks statistical significance; at some group size, that ceases to be the case.

The flippant answer to your answer is that you didn't pick LW randomly out of the set of all groups. The fact that you, a white male, consistently choose to join groups composed mostly of white males - and then inquire about diversity - could have any number of anthropic explanations from your perspective :-) In the end it seems to loop back into "why are you you" again.

ETA: apparently datadataeverywhere is female.

Comment author: Perplexed 09 September 2010 04:35:54PM 4 points [-]

I generally agree with your assessment. But I think there may be more East and South Asians than you think, more 36-80s and more 15-19s too. I have no reason to think we are underrepresented in gays or in deaf people.

My general impression is that women are not made welcome here - the level of overt sexism is incredibly high for a community that tends to frown on chest-beating. But perhaps the women should speak for themselves on that subject. Or not. Discussions on this subject tend to be uncomfortable. Sometimes it seems that the only good they do is to flush some of the more egregious sexists out of the closet.

Comment author: timtyler 09 September 2010 08:09:57PM *  2 points [-]

But perhaps the women should speak for themselves on that subject.

We have already had quite a lot of that.

Comment author: Perplexed 09 September 2010 08:44:56PM 2 points [-]

OMG! A whole top-level-posting. And not much more than a year ago. I didn't know. Well, that shows that you guys (and gals) have said all that could possibly need to be said regarding that subject. ;)

But thx for the link.

Comment author: NancyLebovitz 09 September 2010 04:41:08PM *  6 points [-]

I've been thinking that there are parallels between building FAI and Talmud-- it's an effort to manage an extremely dangerous, uncommunicative entity through deduction. (An FAI may be communicative to some extent. An FAI which hasn't been built yet doesn't communicate.)

Being an atheist doesn't eliminate cultural influence. Survey for atheists: which God do you especially not believe in?

I was talking about FAI with Gene Treadwell, who's black. He was quite concerned that the FAI would be sentient, but owned and controlled.

This doesn't mean that either Eliezer or Gene are wrong (or right for that matter), but it suggests to me that culture gives defaults which might be strong attractors. [1]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

I don't know about explaining ourselves, but we may need more angles on the problem just to be able to do the work.

[1] See also Timothy Leary's S.M.I.2L.E.-- Space Migration, Increased Intelligence, Life Extension. Robert Anton Wilson said that was a match for Catholic hopes of going to heaven, being transfigured, and living forever.

Comment author: Konkvistador 16 September 2010 06:50:35PM *  3 points [-]

He recommended recruiting Japanese members, since they're more apt to like and trust robots.

He has a very good point. I was surprised more Japanese or Koreans hadn't made their way to Lesswrong. This was my motivation for first proposing we recruit translators for Japanese and Chinese and to begin working towards a goal of making at least the sequences available in many languages.

Not being a native speaker of English proved a significant barrier for me in some respects. The first noticeable one was spelling; however, I solved that problem by outsourcing this part of the system known as Konkvistador to the browser. ;) Other more insidious forms of miscommunication and cultural difficulties persist.

Comment author: Wei_Dai 18 September 2010 07:48:05PM 3 points [-]

I'm not sure that it's a language thing. I think many (most?) college-educated Japanese, Koreans, and Chinese can read and write in English. We also seem to have more Russian LWers than Japanese, Koreans, and Chinese combined.

According to a page gwern linked to in another branch of the thread, among those who got 5 on AP Physics C in 2008, 62.0% were White and 28.3% were Asian. But according to the LW survey, only 3.8% of respondents were Asian.

Maybe there is something about Asian cultures that makes them less overtly interested in rationality, but I don't have any good ideas what it might be.

Comment author: Konkvistador 16 September 2010 06:40:35PM *  2 points [-]

I don't know why you presume that, because we are mostly 25-35-year-old white males, a reasonable proportion of us are not deaf, gay, or disabled (one of the top-level posts is by someone who will soon deal with being perhaps limited to communicating with the world via computer).

I smell a whiff of that weird American memplex for minority and diversity that my third-world mind isn't quite used to, but which I seem to encounter more and more often; you know, the one that for example uses the word minority to describe women.

Also, I decline the invitation to defend this community for its lack of diversity; I don't see it, a priori, as a thing in need of a large part of our attention. Rationality is universal, not in the sense of being equally valued in different cultures, but certainly in the sense of being universally effective (rationalists should win). One should certainly strive to keep a site dedicated to refining the art free of unnecessary additional barriers to other people; I think we should eventually translate many articles into Hindi, Japanese, Chinese, Arabic, German, Spanish, Russian and French. However, it's ridiculous to imagine that after we eliminate all such barriers our demographics will somehow come to resemble the socioeconomically adjusted mix of unspecified ethnicities that you seem to be hunting for. I assure you white Westerners have their very, very insane spots (we deal with them constantly), but God, for starters, isn't among them; look at the GSS or various sources on Wikipedia and consider how much more of a thought-stopper and boo light atheism is for a large part of the world. What should the existing population of Less Wrong do? Refrain from bashing theism? This might incur downvotes, but Westerners did come up with the scientific method and did contribute disproportionately to the fields of statistics and mathematics. Is it so unimaginable that the developed world (Iceland, Italy, Switzerland, Finland, America, Japan, Korea, Singapore, Taiwan, etc.) and its majority demographics still have a more rationality-friendly climate overall (due to the caprice of history) than basically any other part of the world? I freely admit my own native culture (though I'm probably thoroughly Westernised by now, due to late-childhood influences of mass media and education) is probably less rational than the Anglo-Saxon one. However, simply going on a "crusade" to make other cultures more rational first, since they are "clearly" more in need, is, besides sending terribly bad signals and courting self-delusion, perhaps a bad idea for humanitarian reasons.

Sex ratio: There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male for the foreseeable future (until human modification takes off). This obviously affects our potential pool of recruits.

Age: People grow more conservative as they age, and Less Wrong is, firstly, available only on a relatively new medium and, secondly, has a novel approach to popularizing rationality. Also, as people age the mind unfortunately does deteriorate. Very few people have an IQ high enough to master difficult fields before they are 15, and even their interests are somewhat affected by their peers.

I am sure I am rationalizing at least a few of these points. However, I need to ask you: is pursuing some popular concept of diversity truly cost-effective at this point? (Why, for example, did you not commend LW on its inclusion of non-neurotypicals, who are often excluded in some segments of society? Why do you only bemoan the under-representation of the groups everyone else does? Is this really a rational approach? Why don't we go study where in memespace we might find truly valuable perspectives and focus on those? Maybe they overlap with the popular kinds, maybe they don't, but can we really trust popular culture, and especially the standard political discourse, on this?)

Comment author: datadataeverywhere 17 September 2010 05:19:36AM *  1 point [-]

If you had read my comment, you would have seen that I explicitly assumed that we are not under-represented among deaf or gay people.

I smell a whiff of that weird American memplex [...] you know the one that for example uses the word minority to describe women.

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

but God for starters isn't among them

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

There are some differences in aptitude, psychology and interests that ensure that compsci and mathematics, at least at the higher levels, will remain disproportionately male

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

age

This also doesn't bother me, for reasons similar to yours. As a friend of mine says, "we'll get gay rights by outliving the homophobes".

why do you only bemoan the under-representation of groups everyone else does?

Which groups should I pay more attention to? This is a serious question, since I haven't thought too much about it. I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

I wasn't actually intending to bemoan anything with my initial question, I was just curious. I was also shocked when I found out that this is dramatically less diverse than I thought, and less than any other large group I've felt a sort of membership in, but I don't feel like it needs to be demonized for that. I certainly wasn't trying to do that.

Comment author: Konkvistador 18 September 2010 06:40:01PM *  2 points [-]

I neglect non-neurotypicals because they are overrepresented in my field, so I tend to expect them amongst similar groups.

How do you know non-neurotypicals aren't over- or under-represented on Less Wrong, compared to the groups that you claim are over-represented on Less Wrong relative to your field, in the same way you know that the groups you bemoan as lacking are under-represented relative to your field?

Is it just because being neurotypical is harder to measure and define? I concede that measuring who is a woman or a man, or who is considered black and who is considered Asian, is in the average case easier than measuring who is neurotypical. But when it comes to definition, those concepts seem to be within the same order of magnitude of fuzziness as being neurotypical (sex a bit less, race a bit more).

Also, you previously established that you don't want to compare Less Wrong's diversity to the entire population of the world. I'm going to tentatively assume that you also accept that academic background will affect whether people can grasp, or are interested in learning, certain key concepts needed to participate.

My question now is, why don't we crunch the numbers instead of people yelling "too many!", "too few!" or "just right!"? We know from which countries and in what numbers visitors come, and we know the educational distributions in most of them. And we know how large a fraction of this group is proficient enough in English to participate meaningfully on Less Wrong.

This is ignoring the fact that the only data we have on sex or race is a simple self-reported poll and our general impression.

But if we crunch the numbers and the probability densities end up looking pretty similar on the best data we can find, well, why doesn't the burden of proving that we are indeed wasting potential on Less Wrong fall on the one proposing policy or action, rather than on those working to improve our odds of progressing towards becoming more rational? And if we are promoting our members' values, even when they aren't neutral or positive towards reaching our objectives, why don't we spell them out, as long as they truly are common? I'm certain there are a few, perhaps the value of life and existence (though these have been questioned and debated here too) or perhaps some utilitarian principles.

But how do we know any position people take would really reflect their values and wouldn't just be status signalling? Heck, many people who profess that their values do or don't include a certain inherent "goodness" of existence probably do so for signalling reasons, and would quickly change their minds in a different situation!

Not even mentioning the general effect of the mindkiller.

But like I have stated before, there are certainly many spaces where we can optimize the stated goal by outreach. This is why I think this debate should continue but with a slightly different spirit. More in line with, to paraphrase you:

Which groups should we pay more attention to? This is a serious question, since we haven't thought too much about it.

Comment author: Konkvistador 18 September 2010 05:51:43PM *  2 points [-]

If less than 4% of us are women, I am quite willing to call that a minority. Would you prefer me to call them an excluded group?

I'm talking about the Western memplex whose members use the word minority when describing women in general society, even though women represent a clear numerical majority.

I was suspicious that you used the word minority in that sense rather than the more clearly defined sense of being a numerical minority.

Sometimes when talking about groups we can avoid discussing which meaning of the word we are employing.

Example: Discussing the repression of the Mayan minority in Mexico.

While other times we can't do this.

Example: Discussing the history and current relationship between the Arab upper class minority and slavery in Mauritania.

This (age) also doesn't bother me, for reasons similar to yours.

Ah, apologies; I see I carried it over from here:

How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.

You explicitly state later that you are particularly interested in this axis of diversity:

However, if we are predominately white males, why are we?

Perhaps this would be more manageable if we looked at each of the axes of variability you raise and talked about them independently, in as much as that is possible? Again, this is why speaking of "groups we usually consider adding diversity" previously got me confused: are there certain groups that are inherently associated with the word diversity? Are we using the word to mean something like "proportionate representation of certain kinds of people in all groups", or are we using it in line with "infinite diversity in infinite combinations", where if you create a mix of 1 part people A and 4 parts people B and have it coexist and cooperate with another mix of 2 parts people A and 3 parts people B, where previously all groups were of the first kind, you create a kind of metadiversity (using the word diversity in its politically charged meaning)?

I specifically brought up atheists as a group that we should expect to over-represent. I'm also not hunting for equal-representation among countries, since education obviously ought to make a difference.

Then why are you hunting for equal representation on LW between different groups united in a space as arbitrary as one defined by borders?

mentioning the western world as the source of the scientific method.

While many important components of the modern scientific method did originate among scholars in Persia and Iraq in the medieval era, its development over the past 700 years has been disproportionately seen in Europe and later its colonies. I would argue its adoption was a part of the reason for the later (let's say the last 300 years) technological superiority of the West.

Edit: I wrote up quite a long wall of text. I'm just going to split it into a few posts so as to make it more readable, as well as to give me a better sense of what is getting upvoted or downvoted based on its merit or lack thereof.

Comment author: wedrifid 18 September 2010 06:50:07PM *  2 points [-]

That seems like it ought to get many more boos around here than mentioning the western world as the source of the scientific method. I ascribe differences in those to cultural influences;

Given new evidence from the ongoing discussion I retract my earlier concession. I have the impression that the bottom line preceded the reasoning.

Comment author: datadataeverywhere 18 September 2010 10:22:41PM *  2 points [-]

I expected your statement to get more boos for the same reason that you expected my premise in the other discussion to be assumed because of moral rather than evidence-based reasons. That is, I am used to other members of your species (I very much like that phrasing) taking very strong and sudden positions condemning suggestions of inherent inequality between the sexes, regardless of whether they have a rational basis. I was not trying to boo your statement myself.

That said, I feel like I have legitimate reasons to oppose suggestions that women are inherently weaker in mathematics and related fields. I mentioned one immediately below the passage you quoted. If you insist on supporting that view, I ask that you start doing so by citing evidence, and then we can begin the debate from there. At minimum, I feel like if you are claiming women to be inherently inferior, the burden of proof lies with you.

Edit: fixed typo

Comment author: Will_Newsome 19 September 2010 05:56:35AM 4 points [-]

Mathematical ability is most remarked on at the far right of the bell curve. It is very possible (and there's lots of evidence to support the argument) that women simply have lower variance in mathematical ability. The average is the same. Whether or not 'lower variance' implies 'inherently weaker' is another argument, but it's a silly one.

I'm much too lazy to cite the data, but a quick Duck Duck Go search or maybe Google Scholar search could probably find it. An overview with good references is here.
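
To make the variance point concrete, here is a minimal sketch (with made-up numbers, purely for illustration; it is not from the linked overview) of how a modest difference in spread, with identical means, dominates the far right tail of a normal distribution:

    from scipy.stats import norm

    mean, cutoff = 100.0, 160.0   # IQ-style scale; cutoff is +4 SD for group A
    sd_a, sd_b = 15.0, 13.5       # equal means; group B's SD is 10% lower (hypothetical)

    p_a = norm.sf(cutoff, loc=mean, scale=sd_a)  # fraction of group A above the cutoff
    p_b = norm.sf(cutoff, loc=mean, scale=sd_b)  # fraction of group B above the cutoff
    print(p_a / p_b)  # roughly 7: a 10% gap in SD leaves ~7x fewer of group B at +4 SD

Nothing here says anything about where a variance difference would come from; it only shows that equal averages are compatible with large disparities among the very top scorers.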

Comment author: [deleted] 19 September 2010 11:25:06PM 3 points [-]

Is mathematical ability a bell curve?

My own anecdotal experience has been that women are rare in elite math environments, but don't perform worse than the men. That would be consistent with a fat-tailed rather than normal distribution, and also with higher computed variance among women.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher. In other words, when you can get as much social status by being a poly sci major as a math major, women tend not to do math, but when math is very clearly ranked as the "top" or "most competitive" option throughout most of your educational life, women are much more likely to pursue it.

Comment author: Will_Newsome 20 September 2010 12:06:59AM 4 points [-]

Is mathematical ability a bell curve?

I have no idea; sorry, saying so was bad epistemic hygiene. I thought I'd heard something like that but people often say bell curve when they mean any sort of bell-like distribution.

Also anecdotal, but it seems that when people come from an education system that privileges math (like Europe or Asia as opposed to the US) the proportion of women who pursue math is higher.

I'm left confused as to how to update on this information... I don't know how large such an effect is, nor what the original literature on gender difference says, which means that I don't really know what I'm talking about, and that's not a good place to be. I'll make sure to do more research before making such claims in the future.

Comment author: datadataeverywhere 19 September 2010 06:34:32PM 2 points [-]

I'm not claiming that there aren't systematic differences in position or shape of the distribution of ability. What I'm claiming is that no one has sufficiently proved that these differences are inherent.

I can think of a few plausible non-genetic influences that could reduce variance, but even if none of those come into play, there must be others that are also possibilities. Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent, but also why I believe that this is such a difficult task?

Comment author: wedrifid 19 September 2010 07:03:49PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is socio-political rather than epistemic.

, but also why I believe that this is such a difficult task?

That was the point of making the demand.

You cannot change reality by declaring that other people have 'burdens of proof'. "Everything is cultural" is not a privileged hypothesis.

Comment author: Perplexed 19 September 2010 07:24:33PM 1 point [-]

Do you see why I'm placing the burden of proof on you to show that differences are biologically inherent

Either because you don't understand how bayesian evidence works or because you think the question is socio-political rather than epistemic.

It might have been marginally more productive to answer "No, I don't see. Would you explain?" But, rather than attempting to other-optimize, I will simply present that request to datadataeverywhere. Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

... but also why I believe that this is such a difficult task?

I can certainly see this as a difficult task. For example, we can imagine that fictional rational::Harry Potter and Hermione were both taught as children that it is ok to be smart, but that only Hermione was instructed not to be obnoxiously smart. This dynamic, by itself, would be enough to strongly suppress the number of women who rise to the highest levels in math.

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

Rather than simply demanding that your interlocutor show his evidence first, why not go ahead and show yours?

Comment author: datadataeverywhere 20 September 2010 02:47:31AM 1 point [-]

But producing convincing evidence in this area is not an impossible task. For example, we can empirically assess the impact of the above mechanism by comparing the number of bright and very bright men and women who come from different cultural backgrounds.

I agree, and this was what I meant. Distinguishing between nature and nurture, as wedrifid put it, is a difficult but not impossible task.

Why is the placement of "burden" important? With this supplementary question: Do you know of evidence strongly suggesting that different cultural norms might significantly alter the predominant position of the male sex in academic mathematics?

I hope I answered both of these in my comment to wedrifid below. Thank you for bothering to take my question at face value (as a question that requests a response), instead of deciding to answer it with a pointless insult.

Comment author: wedrifid 19 September 2010 04:43:18AM 4 points [-]

If you insist on supporting that view

Absolutely not. In general people overestimate the importance of 'intrinsic talent' on anything. The primary heritable component of success in just about anything is motivation. Either g or height comes second depending on the field.

Comment author: datadataeverywhere 19 September 2010 05:13:42AM 2 points [-]

I agree. I think it is quite obvious that ability is always somewhat heritable (otherwise we could raise our pets as humans), but this effect is usually minimal enough to not be evident behind the screen of either random or environmental differences. I think this applies to motivation as well!

And that was really what my claim was; anyone who claims that women are inherently less able in mathematics has to prove that any measurable effect is distinguishable from, and not caused by, cultural factors that lead fewer women to take an interest in mathematics.

Comment author: Konkvistador 18 September 2010 06:39:42PM *  2 points [-]

I ascribe differences in those to cultural influences; I don't claim that aptitude isn't a factor, but I don't believe it has been or can easily be measured given the large cultural factors we have.

But if we can't measure the cultural factors and account for them, why presume a blank-slate approach? Especially since there is sexual dimorphism in the nervous and endocrine systems themselves.

I think you got stuck on the aptitude point. To elaborate: considering that humans aren't a very sexually dimorphic species (there are near relatives that are less so, however; gibbons, for example), I'm pretty sure the mean g (if such a thing exists) of men and women is probably about the same. There are, however, other aspects of succeeding at compsci or math than general intelligence.

Assuming that men and women carrying exactly the same memes will respond on average identically to identical situations is an extraordinary claim. I'm struggling to come up with an evolutionary model that would square this with what is known (for example, the greater historical reproductive success of the average woman vs. the average man, which we can read from the distribution of genes). If I were presented with empirical evidence, then this would just be too bad for the models; but in the absence of meaningful measurement (by your account), why not assign greater probability to the outcome predicted by the same models that work so well when tested against other empirical claims?

I would venture to state that this case is especially strong for preferences.

And if you are trying to fine-tune the situations and memes that men and women encounter so as to balance this, how can one demonstrate that this isn't a step away from, rather than toward, improving Pareto efficiency? And if it's not, why proceed with it?

Also, to admit a personal bias: I just aesthetically prefer equal treatment whenever pragmatic concerns don't trump it.

Comment author: lmnop 18 September 2010 07:41:09PM *  4 points [-]

But if we can't measure the cultural factors and account for them

We can't directly measure them, but we can get an idea of how large they are and how they work.

For example, consider the gender difference in empathic abilities. While women score higher on self-report tests of empathy, the difference is much smaller on direct tests of ability, and often nonexistent on tests of ability where participants aren't told that it's empathy being tested. And then there's the motivation of seeming empathetic. One of the best empathy tests I've read about is Ickes', which worked like this: two participants meet in a room and have a brief conversation, which is taped. Then they go into separate rooms and the tape is played back to them twice. The first time, they jot down the times at which they remember feeling various emotions. The second time, they jot down the times at which they think their partner was feeling an emotion, and what it was. Then the records are compared, and each participant receives an accuracy score. When the test is run like this, there is no difference in ability between men and women. However, a difference emerges when another factor is added: each participant is asked to write a "confidence level" for each prediction they make. In that procedure, women score better, presumably because their desire to appear empathetic (write down higher confidence levels) causes them to put more effort into the task. But where do desires to appear a certain way come from? At least partly from cultural factors that dictate how each gender is supposed to appear. This is probably the same reason why women are overconfident in self-reporting their empathic abilities relative to men.

The same applies to math. Among women and men with the same math ability as scored on tests, women will rate their own abilities much lower than the men do. Since people do what they think they'll be good at, this will likely affect how much time these people spend on math in future, and the future abilities they acquire.

And then there's priming. Asian American women do better on math tests when primed with their race (by filling in a "race" bubble at the top of the test) than when primed with their gender (by filling in a "sex" bubble). More subtly, priming affects people's implicit attitudes towards gender-stereotyped domains too. People are often primed about their gender in real life, each time affecting their actions a little, which over time will add up to significant differences in the paths they choose in life in addition to that which is caused by innate gender differences. Right now we don't have enough information to say how much is caused by each, but I don't see why we can't make more headway into this in the future.

Comment author: Emile 09 September 2010 04:25:55PM 2 points [-]

There's nothing about being white, or female, or hispanic, or deaf, or gay that prevents one from being a rationalist.

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

Comment author: thomblake 16 September 2010 08:00:03PM 5 points [-]

I may be wrong, but I don't expect the proportion of gays in LessWrong to be very different from the proportion in the population at large.

My vague impression is that the proportion of people here with minority sexual orientations is higher than the corresponding proportion in the population.

This is probably explained completely by Lw's tendency to attract <strike>weirdos</strike> people who are willing to question orthodoxy.

Comment author: Perplexed 09 September 2010 03:26:45AM 1 point [-]

Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.

Comment author: RobinZ 09 September 2010 03:49:14AM 3 points [-]

While katydee's story is possible (and probable, even), it is also possible that someone is catching up on their Less Wrong reading for a substantial recent period and issuing many votes (up and down) in that period. Some people read Less Wrong in bursts, and some of those are willing to lay down many downvotes in a row.

Comment author: katydee 09 September 2010 03:43:54AM 2 points [-]

It is possible that someone has gone through your old comments and systematically downvoted them-- I believe pjeby reported that happening to him at one point.

In the interest of full disclosure, I have downvoted you twice in the last half hour and upvoted you once. It's possible that fifty other people think like me, but if so you should have very negative karma on some posts and very positive karma on others, which doesn't appear to be the case.

Comment author: Perplexed 09 September 2010 03:55:27AM 2 points [-]

I think you are right about the systematic downvoting. I've noticed and not minded the downvotes on my recent controversial postings. No hard feelings. In fact, no real hard feelings toward whoever gave me the big hit - they are certainly within their rights and I am certainly currently being a bit of an obnoxious bastard.

Comment author: Perplexed 12 September 2010 07:25:43PM 1 point [-]

And now my karma has jumped by more than 300 points! WTF? I'm pretty sure this time that someone went through my comments systematically upvoting. If that was someone's way of saying "thank you" ... well ... you are welcome, I guess. But isn't that a bit much?

Comment author: JanetK 02 September 2010 07:54:27AM 1 point [-]

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Comment author: thomblake 02 September 2010 05:19:34PM 3 points [-]

The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.

Comment author: Emile 02 September 2010 08:15:56AM 2 points [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

I don't think that's how most people here understand "rationalism".

Comment author: JanetK 02 September 2010 09:09:40AM 1 point [-]

I don't think that's how most people here understand "rationalism".

Good

Comment author: timtyler 02 September 2010 08:39:23AM *  1 point [-]

There is at least one post about that - though I don't entirely approve of it.

Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.

Comment author: Kenny 03 September 2010 10:56:39PM 2 points [-]

Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).

Comment author: kodos96 02 September 2010 08:33:28AM 1 point [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

Ummmmmmmm.... no.

The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.

Comment author: allenwang 12 September 2010 04:11:17AM 1 point [-]

I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.

I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackle the irrational tangle that many of our institutions appear to be caught up in, so I thought this would be the perfect place to get some expertise. The project is kind of at a standstill, and it really needs some advice and leads (and collaborators), so please feel free to praise, criticize, advise, or even join.

I saw orthonormal's "Welcome to Less Wrong" post, so I guess this is where to post before I build up enough points. I hope this isn't too long an introductory post for this thread?

The aim of the project is to achieve a population that is more educated in the basics of climate change science and policy, with the hope that a more educated voting public will be a big step towards achieving the policies necessary to deal with climate change.

The basic problem of educating the public about climate change is twofold. First, people sometimes get trapped in "information cocoons" (I am using Cass Sunstein's terminology from his book Infotopia). Information cocoons are created when the news and information people seek out and surround themselves with is biased by what they already know. They are either completely unaware of competing evidence or, if they are aware of it, they revise their network of beliefs to deny the credibility of those who offer it rather than treating it as serious evidence. Usually, this is because they believe it is more probable that those people are not credible than that they themselves could be wrong. This problem has always existed, and has perhaps worsened since the rise of the personalized web. People who are trapped in information cocoons of denial of anthropogenic climate change will require much more evidence and counterargument before they can begin to revise an entire network of beliefs that supports their current conclusions.

Second, the population is uneducated about climate change because people lack the incentive to learn about the issues. Although we would presumably all benefit if everyone took the time to thoroughly understand the issue, the individual costs and benefits of doing so actually run the other way. Because the benefits of better policies accrue to everybody, but the costs are borne by the individual, people have an incentive to free-ride: to let everybody else worry about the issue, because either way their individual contribution means little, and everybody else can make the informed decision. But of course, with everybody reasoning this way, the level of education on these issues is much lower than is optimal (or even than is necessary to create change, especially if there are interest groups with opposing goals).
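
The free-rider logic can be made concrete with a toy calculation (the numbers are invented solely to illustrate the structure of the incentives):

    benefit to society if one more voter becomes informed:  B = 1000
    people sharing that benefit:                            n = 1,000,000
    private share of the benefit:                           B/n = 0.001
    private cost of becoming informed (time, effort):       c = 10
    private payoff:                                         B/n - c = -9.999

Even though the social payoff B - c is large and positive, each individual's private payoff is negative, so everyone rationally waits for everyone else.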

The solution is to institute some system that can crack open these information cocoons and at the same time provide wide-ranging personal incentives for participating. For the former, we propose to develop a layman's guide to climate change science and to economic and environmental policy. Many such guides already exist, although we have some different ideas about how to make ours more transparent to criticism and more thorough in its discussion of the epistemic uncertainty surrounding the whole issue. (There is definitely a lot we can learn from Less Wrong on this point.) Also, I think we have a unique idea about developing a system of personal incentives. I will discuss this latter issue first.

Comment author: taw 04 September 2010 06:04:29AM 1 point [-]

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.

What I found was rather disastrous, far worse than with the neat and unambiguous temporal logics: low expressiveness, ambiguous interpretations, far too many paradoxes that seem to be more about failing to specify the underlying logic correctly than about actual problems, and no convergence on a single deontic logic that works.

After reading all this, I made a few quick attempts at defining a logic of obligations, just to be sure it's not some sort of collective insanity, but they all ran into very similar problems extremely quickly.

Now I'm in no way deontologically inclined, but if I were, this would really bother me. If it's really impossible to formally express obligations, this kind of ethics is built on an extremely flimsy basis. Consequentialism has plenty of problems in practice, but at least in hypothetical scenarios it's very easy to model correctly. Deontic logic seems to lack even that.

Is there any kind of deontic logic that works well that I missed? I'm not talking about solving FAI, constructing universal rules of morality or anything like it - just about a language that expresses exactly the kind of obligations we want, and which works well in simple hypothetical worlds.
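
For what it's worth, a standard illustration of the kind of paradox taw describes (not necessarily the specific ones he ran into) is Ross's paradox. Standard deontic logic closes obligation under logical consequence: if p entails q, then O(p) entails O(q). Since "you mail the letter" entails "you mail the letter or you burn it", we get

    O(mail the letter)  |-  O(mail the letter OR burn the letter)

so "you ought to mail the letter" yields an obligation that could apparently be discharged by burning it. Failures of this sort come from the interaction of O with the ordinary connectives, which is why they surface almost immediately in any quick attempt at formalization.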

Comment author: steven0461 01 September 2010 10:07:46PM 1 point [-]

Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.

Comment author: Vladimir_Nesov 01 September 2010 10:11:33PM *  6 points [-]

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth.

I don't think Aumann's agreement theorem has anything to do with taking people's opinions as evidence. Aumann's agreement theorem is about agents turning out to have been agreeing all along, given certain conditions; it is not about how to come to an agreement, or, worse, how to enforce agreement by responding to others' beliefs.

More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for "boo disagreement", having nothing to do with the theorem itself. It's easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation.

Comment author: steven0461 01 September 2010 10:32:27PM *  2 points [-]

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?

Comment author: Wei_Dai 01 September 2010 11:47:07PM 2 points [-]

I think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community.

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments

I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement:

But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?
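
A minimal sketch of the kind of protocol Wei_Dai describes, in Python: a toy Geanakoplos-Polemarchakis-style alternating announcement process with a uniform common prior. The state space, partitions, and realized state below are all invented for illustration; each announcement lets the other agent infer which states are consistent with it, refining that agent's information:

    from fractions import Fraction

    STATES = set(range(8))        # eight equally likely states (common prior)
    EVENT = {0, 1, 2, 3}          # the event whose probability is discussed

    P1 = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]   # agent 1's information partition
    P2 = [{0, 2, 4}, {1, 3, 5}, {6, 7}]     # agent 2's information partition

    def cell(partition, w):
        # The block of the partition containing state w.
        return next(c for c in partition if w in c)

    def post(info):
        # P(EVENT | info) under the uniform prior.
        return Fraction(len(info & EVENT), len(info))

    def refine(partition, announce):
        # Split each block by what the announcer would have said in each state.
        out = []
        for c in partition:
            groups = {}
            for s in c:
                groups.setdefault(announce(s), set()).add(s)
            out.extend(groups.values())
        return out

    w = 3                                    # the realized state
    p1, p2 = P1, P2
    for step in range(10):
        q1 = post(cell(p1, w))               # agent 1 announces her posterior
        p2 = refine(p2, lambda s: post(cell(p1, s)))   # agent 2 deduces from it
        q2 = post(cell(p2, w))               # agent 2 announces his posterior
        p1 = refine(p1, lambda s: post(cell(p2, s)))   # agent 1 deduces from it
        print(step, q1, q2)
        if q1 == q2:
            break                            # posteriors agree

On this toy example the two posteriors (initially 1 and 2/3) coincide after a single exchange: agent 2 never hears agent 1's evidence, but deduces exactly which partition cell she must occupy from her announced posterior, which is the point of the quoted passage.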

Comment author: James_Miller 01 September 2010 04:36:53AM *  0 points [-]

Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.

Consider two beliefs:

  1. The next millennium will be the most critical in mankind’s existence because in most of the Everett branches arising out of today mankind will go extinct or start spreading through the stars.

  2. Eliezer’s work on friendly AI makes him the most significant determinant of our fate in (1).

Let 10^N represent the average across our future Everett branches of the total number of sentient beings whose ancestors arose on earth. If Eliezer holds beliefs (1) and (2) then he considers himself the most important of these beings, and the probability of this happening by chance is 1 in 10^N. But if (1) holds then the rest of us are extremely important as well, through how our voting, buying, contributing, writing… influences mankind's fate. Let's say that makes most of us one of the trillion most important beings who will ever exist. The probability of this happening by chance is 1 in 10^(N-12).

If N is at least 18, it's hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.
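
Spelling out the arithmetic behind the comparison (a restatement, using the post's own quantities): with 10^N total beings, the chance of being the single most important one by luck alone is

    P(most important of 10^N) = 1/10^N = 10^-N

while the chance of landing among the top trillion is

    P(top 10^12 of 10^N) = 10^12 / 10^N = 10^-(N-12)

For N = 18 these are 10^-18 and 10^-6: both astronomically small, though still a factor of 10^12 apart, which is the gap wedrifid's reply below seizes on.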

Comment author: JamesAndrix 01 September 2010 05:41:48AM 4 points [-]

(2) is ambiguous. Getting to the stars requires a number of things to go right. Eliezer is of relatively little use in preventing a major nuclear exchange in the next 10 years, or bad nanotech, or garage-made bioweapons, or even UFAI development.

FAI is just the final thing that needs to go right; everything else needs to go mostly right until then.

Comment author: Snowyowl 01 September 2010 11:19:59AM 2 points [-]

And I can think of a few ways humanity can get to the stars even if FAI never happens.

Comment author: KevinC 01 September 2010 05:02:39AM *  3 points [-]

Can you provide a cite for the notion that Eliezer believes (2)? Since he's not likely to build the world's first FAI in his garage all by himself, without incorporating the work of any of the thousands of other people working on FAI and FAI's necessary component technologies, I think it would be a bit delusional of him to believe (2) as stated. Which is not to suggest that his work is not important, or even among the most significant work done in the history of humankind (even if he fails, others can build on that and find the way that works). But that's different from the idea that he, alone, is The Most Significant Human Who Will Ever Live. I don't get the impression that he's that cocky.

Comment author: wedrifid 01 September 2010 04:58:45AM *  3 points [-]

If N is at least 18, it's hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.

Really? How about "when you are, in fact, 1/10^(N-12) and have good reason to believe it"? Throwing in a large N doesn't change the fact that 10^N is still 1,000,000,000,000 times larger than 10^(N-12), nor does it mean we could not draw conclusions about belief (2).

(Not commenting on Eliezer here, just suggesting the argument is not all that persuasive to me.)