Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open Thread: February 2010, part 2

10 Post author: CronoDAS 16 February 2010 08:29AM

The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (857)

Comment author: Kaj_Sotala 17 February 2010 02:12:05PM 24 points [-]

I've been finding PJ Eby's article The Multiple Self quite useful for fighting procrastination and needless feelings of guilt about not getting enough done / not being good enough at things.

I have difficulty describing the article briefly, as I'm afraid that I'll accidentally omit important points and make people take it less seriously than it deserves, but I'll try. The basic idea is that the conscious part of our mind only does an exceedingly small part of all the things we do in our daily lives. Instead, it tells the unconscious mind, which actually does everything of importance, what it should be doing. As an example - I'm writing this post right now, but I don't actually consciously think about hitting each individual key or its exact location on my keyboard. Instead I just tell my mind what I want to write, and "outsource" the task of actually hitting the keys to an "external" agent. (Make a function call to a library implementing the I/O, if you want to use a programming metaphor.) Of course, ultimately the words I'm writing come from beyond my conscious mind as well. My conscious mind is primarily concerned with communicating Eby's point well to my readers, and is instructing the rest of my brain to come up with eloquent words and persuasive examples to that effect. And so on.

Thinking about this some more, you quickly end up at the conclusion that "you" don't actually do anything, you're just the one who makes the decisions about what to do. (Eby uses the terminological division you / yourself, as in "you don't do anything - yourself does".) Of course, simply saying that is a bit misleading, as yourself normally also determines what you want to do. I would describe this as saying that one's natural feelings of motivation and willingness to do things are what you get when you leave your mind "on autopilot", shifting to different emotional states based on a relatively simple set of cached rules. That works at times, but the system is rather stupid and originally evolved for guiding the behavior of animals, so in a modern environment it often gets you in trouble. You're better off consciously giving it new instructions.

I've found this model of the mind to be exceedingly liberating, as it both absolves you of responsibility and empowers you. As an example, yesterday I was procrastinating about needing to write an e-mail that I should have written a week ago. Then I remembered Eby's model and realized that hey, I don't need to spend time and energy fighting myself, I can just outsource the task of starting to write to myself. So I basically just instructed myself to get me into a state where I'm ready and willing to start writing. A brief moment later, I had the compose mail window open and was thinking about what I should say, and soon got the mail written. This has also helped me on other occasions when I've had a need to start doing something. If I'm not getting started on something and start feeling guilty about it, I can realize that hey, it's not my fault that I'm not getting anything done, it's the fault of myself for having bad emotional rules that aren't getting me naturally motivated. Then I can focus my attention on "how do I instruct myself to make me motivated about this" and get going on whatever it is that needs doing.

I'll make this into a top-level post once I've ascertained that this technique actually works in the long term and I'm not just experiencing a placebo effect, but I thought I'd mention it in a comment already.

Comment author: xamdam 17 February 2010 06:04:10PM 5 points [-]

This somehow reminds me of the stories about when Tom Schelling was trying to quit smoking, using game theory against himself (or his other self). The other self in question was not the unconscious, but the conscious "decision-making" self in different circumstances. So that discussion is somewhat orthogonal to this one. I think he did things like promising to give a donation to the American Nazi Party if he smokes. Not sure how that round ended, but he did finally quit.

Comment author: Jack 17 February 2010 06:26:17PM *  6 points [-]

So that discussion is somewhat orthogonal to this one. I think he did things like promising to give a donation to the American Nazi Party if he smokes.

Hmm. I'd be worried it'd backfire and I'd start subtly disliking Jews. Then you're a smoker and a bigot.

Comment author: xamdam 17 February 2010 06:42:52PM 2 points [-]

lol. Not a problem if you're Jewish ;)

Comment author: Jack 17 February 2010 06:47:49PM 3 points [-]

Self-hatred is even worse than being a bigot!

Comment author: khafra 17 February 2010 11:12:02PM 2 points [-]

Reminds me of The User Illusion, which adds that consciousness has an astoundingly low bandwidth--around 16 bits per second--roughly six orders of magnitude less than what the senses transmit to the brain.
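
As a rough check on those figures (the 16 bits per second value is the book's estimate of conscious throughput; the sensory figure of roughly ten million bits per second is the commonly cited order of magnitude, taken here as an assumption):

```latex
\frac{\text{sensory input}}{\text{conscious bandwidth}}
  \approx \frac{10^{7}\ \text{bits/s}}{16\ \text{bits/s}}
  \approx 6 \times 10^{5}
```

which is within rounding of the six orders of magnitude claimed above.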

Comment author: CronoDAS 17 February 2010 03:18:36PM *  2 points [-]

Interesting.

I've glanced at that site before and its metaphors have the ring of truthiness (in a non-pejorative sense) about them; the programming metaphors and the focus on subconscious mechanisms seem to resonate with the way I already think about how my own brain works.

Comment author: RobinZ 17 February 2010 03:29:36PM 3 points [-]

its metaphors have the ring of truthiness (in a non-pejorative sense) about them

Couldn't that be more succinctly stated as "its metaphors have the ring of truth about them"?

Comment author: CronoDAS 18 February 2010 11:18:15PM 3 points [-]

Maybe, but a lot of Freud's metaphors had/have a similar ring.

Comment author: orthonormal 17 February 2010 07:32:49AM *  14 points [-]
Comment author: CronoDAS 16 February 2010 11:24:42AM 14 points [-]

Here's something interesting on gender relations in ancient Greece and Rome.

Why did ancient Greek writers think women were like children? Because they married children - the average woman had her first marriage between the ages of twelve and fifteen, and her husband would usually be in his thirties.

Comment author: bgrah449 16 February 2010 03:56:36PM *  2 points [-]

The reason ancient Greek writers thought women were like children is the same reason men in all cultures think women are like children: There are significant incentives to do so. Men who treat women as children reap very large rewards compared to those men who treat women as equals.

EDIT: If someone thinks this is an invalid point, please explain in a reply. If the downvote(s) is just "I really dislike anyone believing what he's saying is true, even if a lot of evidence supports it" (regardless of whether or not evidence currently supports it) then please leave a comment stating that.

EDIT 2: Supporting evidence or retraction will be posted tonight.

EDIT 3: As I can find no peer-reviewed articles suggesting this phenomenon, I retract this statement.

Comment author: Morendil 16 February 2010 06:14:36PM *  15 points [-]

This conversation has been hacked.

The parent comment points to an article presenting a hypothesis. The reply flatly drops an assertion which will predictably derail conversation away from any discussion of the article.

If you're going to make a comment like that, prefix it with something along the lines of "The hypothesis in the article seems superfluous to me; men in all cultures treat women like children because...", and point to sources for the claim; then I would confidently predict that no downvotes will result.

(ETA: well, in this case the downvote is mine, which makes prediction a little too easy - but the point stands.)

Comment author: gwern 18 February 2010 03:16:22AM 3 points [-]

Men who treat women as children reap very large rewards compared to those men who treat women as equals.

The article suggests a direct counter-example: by having high standards, the men forfeit the labor of the women in things like 'help[ing] with finance and political advice'. Much like the standard libertarian argument against discrimination: racists narrow their preferences, raising the cost of labor, and putting themselves at a competitive disadvantage.

Men may as a group have incentive to keep women down, but this is a prisoner's dilemma.

Comment author: Cyan 20 February 2010 11:52:25PM *  13 points [-]

Are people interested in reading a small article about a case of abuse of frequentist statistics? (In the end, the article was rejected, so the peer review process worked.) Vote this comment up if so, down if not. Karma balance below.

ETA: Here's the article.

Comment author: Gavin 17 February 2010 05:43:17AM 13 points [-]

Until yesterday, a good friend of mine was under the impression that the sun was going to explode in "a couple thousand years." At first I thought that this was an assumption that she'd never really thought about seriously, but apparently she had indeed thought about it occasionally. She was sad for her distant progeny, doomed to a fiery death.

She was moderately relieved to find out that humanity had millions of times longer than she had previously believed.

Comment author: sketerpot 17 February 2010 07:38:44PM *  8 points [-]

I wonder how many trivially wrong beliefs we carry around because we've just never checked them. (Probably most of them are mispronunciations of words, at least for people who've read a lot of words they've never heard anybody else use aloud.)

For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.

Comment author: wedrifid 17 February 2010 08:54:22PM 2 points [-]

For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.

Does it still give you superpowers?

Comment author: sketerpot 18 February 2010 12:28:07AM *  12 points [-]

If you extract the plutonium and make enough warheads, and you have missiles capable of delivering them, it can make you a superpower in a different sense. I'm assuming that you're a large country, of course.

More seriously, nuclear waste is just a combination of the following:

  1. Mostly Uranium-238, which can be used in breeder reactors.

  2. A fair amount of Uranium-235 and Plutonium-239, which can be recycled for use in conventional reactors.

  3. Hot isotopes with short half lives. These are very radioactive, but they decay fast.

  4. Isotopes with medium half lives. These are the part that makes the waste dangerous for a long time. If you separate them out, you can either store them somewhere (e.g. Yucca Mountain or a deep-sea subduction zone) or turn them into other, more pleasant isotopes by bombarding them with some spare neutrons. This is why liquid fluoride thorium reactor waste is only dangerous for a few hundred years: it does this automatically.

And that is why people are simply ignorant when they say that we still have no idea what to do with nuclear waste. It's actually pretty straightforward.

Incidentally, this is a good example of motivated stopping. People who want nuclear waste to be their trump-card argument have an emotional incentive not to look for viable solutions. Hence the continuing widespread ignorance.

Comment author: ciphergoth 17 February 2010 11:16:12PM *  3 points [-]

I envy you being the one to tell someone that!

Did you explain that the Sun was a miasma of incandescent plasma?

Comment author: [deleted] 16 February 2010 08:46:00PM 36 points [-]

So, I walked into my room, and within two seconds, I saw my laptop's desktop background change. I had the laptop set to change backgrounds every 30 minutes, so I did some calculation, and then thought, "Huh, I just consciously experienced a 1-in-1000 event."

Then the background changed again, and I realized I was looking at a screen saver that changed every five seconds.

Moral of the story: 1 in 1000 is rare enough that even if you see it, you shouldn't believe it without further investigation.
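
The arithmetic behind the estimate is just the ratio of the observation window to the interval between changes; a minimal sketch, taking the two-second glance at face value:

```python
# Chance of catching a change that happens once every 30 minutes
# within a roughly 2-second glance (assuming the change time is uniform).
window_s = 2             # seconds of observation
interval_s = 30 * 60     # seconds between background changes

p = window_s / interval_s
print(f"P = {p:.5f}, about 1 in {round(1 / p)}")
# -> P = 0.00111, about 1 in 900 -- close enough to call it 1 in 1000
```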

Comment author: Eliezer_Yudkowsky 16 February 2010 10:44:51PM 13 points [-]

That is a truly beautiful story. I wonder how many places there are on Earth where people would appreciate this story.

Comment author: xamdam 17 February 2010 08:49:44PM 19 points [-]

No! Not for a second! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. That the clock probably stopped some time before and the nurse coming in to the room to record the time of death would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.

-- Richard P. Feynman, on being asked if he thought that the fact that his wife's favorite clock had stopped the moment she died was a supernatural occurrence, quoted from Al Seckel, "The Supernatural Clock"

Comment author: RichardKennaway 18 February 2010 12:04:25AM 2 points [-]

This should be copied to the Rationality Quotes thread.

Comment author: ciphergoth 16 February 2010 11:43:19PM *  4 points [-]

There are a lot of opportunities in the day for something to happen that might prompt you to think "wow, that's one in a thousand", though. It wouldn't have been worth wasting a moment wondering if it was coincidence unless you had some reason to suspect an alternative hypothesis, like that it changed because the mouse moved.

bit that makes no sense deleted

Comment author: lunchbox 17 February 2010 05:01:38AM 2 points [-]
Comment author: [deleted] 22 February 2010 03:23:57AM 8 points [-]

The Believable Bible

This post arose when I was pondering the Bible and how easy it is to justify. In the process of writing it, I think I've answered the question for myself. Here it is anyway, for the sake of discussion.

Suppose that there's a world very much like this one, except that it doesn't have the religions we know. Instead, there's a book, titled The Omega-Delta Project, that has been around in its current form for hundreds of years. This is known because a hundreds-of-years-old copy of it happens to exist; it has been carefully and precisely compared to other copies of the book, and they're all identical. It would be unreasonable, given the evidence, to suspect that it had been changed recently. This book is notable because it happens to be very well-written and interesting, and scholars agree it's much better than anything Shakespeare ever wrote.

This book also happens to contain 2,000 prophecies. 500 of them are very precise predictions of things that will happen in the year 2011; none of these prophecies could possibly be self-fulfilling, because they're all things that the human race could not bring about voluntarily (e.g. the discovery of a particular artifact, or the birth of a child under very specific circumstances). All of these 500 prophecies are relatively mundane, everyday sorts of things. The remaining 1,500 prophecies are predictions of things that will happen in the year 2021; unlike the first 500, these prophecies predict Book-of-Revelations-esque, magical things that could never happen in the world as we know it, essentially consisting of some sort of supreme being revealing that the world is actually entirely different from how we thought it was.

The year 2011 comes, and every single one of the 500 prophecies comes true. What is the probability that every single one of the remaining 1,500 prophecies will also come true?

Comment author: Eliezer_Yudkowsky 22 February 2010 03:43:33AM 6 points [-]

Pretty darned high, because at this point we already know that the world doesn't work the way we think it did.
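
One way to make "pretty darned high" concrete is a rough Bayesian sketch, assuming each mundane prophecy would have had at most even odds of coming true by chance:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)},
\qquad
P(E \mid \neg H) \le 2^{-500}
```

where H is "the book's prophecies are reliable" and E is "all 500 mundane prophecies came true." A likelihood ratio on the order of 2^500 swamps any remotely reasonable prior against H, so the probability of the remaining 1,500 prophecies is dominated by how strongly H ties them to the first 500.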

Comment author: LucasSloan 17 February 2010 04:06:34AM *  8 points [-]

When new people show up at LW, they are often told to "read the sequences." While Eliezer's writings underpin most of what we talk about, 600 fairly long articles make heavy reading. Might it be advisable that we set up guided tours to the sequences? Do we have enough new visitors that we could get someone to collect all of the newbies once a month (or whatever) and guide them through the backlog, answer questions, etc?

Comment author: Larks 17 February 2010 10:27:16AM 7 points [-]

Most articles link to those preceding them, but it would be very helpful to have links to the articles that follow.

Comment author: wedrifid 17 February 2010 04:53:03AM *  5 points [-]

That's not a bad idea. How about just a third monthly thread? To be created when a genuinely curious newcomer is asking good, but basic questions. You do not want to distract from a thread but at the same time you may be willing to spend time on educational discussion.

Comment author: JamesAndrix 17 February 2010 06:31:27AM 2 points [-]

I approve. This may also spawn new ways of explaining things.

Comment author: Dre 17 February 2010 05:20:31AM 2 points [-]

Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.

Comment author: Karl_Smith 18 February 2010 10:10:38PM 3 points [-]

Yes, I am working my way through the sequences now. Hearing these ideas makes one want to comment, but so frequently it's only a day or two before I read something that renders my previous thoughts utterly stupid.

It would be nice to have a "read this and you won't be a total moron on subject X" guide.

Also, it would be good to encourage the readings about Eliezer's Intellectual Journey. Though it's at the bottom of the sequences page, I used it as "rest reading" between the harder sequences.

It did a lot to convince me that I wasn't inherently stupid. Knowing that Eliezer has held foolish beliefs in the past is helpful.

Comment author: MendelSchmiedekamp 17 February 2010 02:38:08PM *  3 points [-]

Arguably, given how seminal the sequences are treated as being, why are the "newbies" the only ones who should be (re)reading them?

Comment author: jtolds 17 February 2010 07:43:50AM 2 points [-]

As a newcomer, I would find this tremendously useful. I've clicked through the wiki links on noteworthy articles, but often find there are a lot of assumptions or previously discussed things that get mentioned but not explained. Perhaps this would help.

Comment author: AngryParsley 19 February 2010 09:33:57PM 7 points [-]

The FBI released a bunch of docs about the anthrax letter investigation today. I started reading the summary since I was curious about codes used in the letters. All of a sudden on page 61 I see:

c. Godel, Escher, Bach: the book that Dr. Ivins did not want investigators to find

The next couple of pages talk about GEB and relate some parts of it to the code. It's really weird to see literary analysis of GEB in the middle of an investigation on anthrax attacks.

Comment author: Karl_Smith 16 February 2010 08:37:05PM 7 points [-]

Could someone discuss the pluses and minuses of Alcor vs. the Cryonics Institute?

I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seem to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind or see fit to co-operate with a cryonics contract.

Thoughts?

Comment author: Alicorn 16 February 2010 09:30:41PM 5 points [-]

It's not at all obvious to me how to comparison-shop for cryonics. The websites are good as far as they go, but CI's in particular is tricky to navigate, funding with life insurance messes with my estimation of costs, and there doesn't seem to be a convenient chart saying "if you're this old and this healthy and this solvent and your family members are this opposed to cryopreservation, go with this plan from this org".

Comment author: ata 20 February 2010 10:35:03AM *  5 points [-]

Could anyone recommend an introductory or intermediate text on probability and statistics that takes a Bayesian approach from the ground up? All of the big ones I've looked at seem to take an orthodox frequentist approach, aside from being intolerably boring.

Comment author: Cyan 20 February 2010 09:06:23PM *  4 points [-]

(All of the below is IIRC.)

For a really basic introduction, there's Elementary Bayesian Statistics. It's not worth the listed price (it has little value as a reference text), but if you can find it in a university library, it may be what you need. It describes only the de Finetti coherence justification; on the practical side, the problems all have algebraic solutions (it's all conjugate priors, for those familiar with that jargon) so there's nothing on numerical or Monte Carlo computations.
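
For readers unfamiliar with the jargon: a conjugate prior is one whose posterior stays in the same algebraic family, so updating is just bookkeeping on parameters. The standard toy illustration (not drawn from the book itself) is the Beta-Binomial pair:

```latex
\theta \sim \mathrm{Beta}(\alpha, \beta), \quad
k \mid \theta \sim \mathrm{Binomial}(n, \theta)
\;\Longrightarrow\;
\theta \mid k \sim \mathrm{Beta}(\alpha + k,\ \beta + n - k)
```

Observing k successes in n trials just adds counts to the prior's parameters, with no numerical integration needed; that is the sense in which such problems have purely algebraic solutions.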

Data Analysis: A Bayesian Approach is a slender and straightforward introduction to the Jaynesian approach. It describes only the Cox-Jaynes justification; on the practical side, it goes as far as computation of the log-posterior-density through a multivariate second-order Taylor approximation. It does not discuss Monte Carlo methods.

Bayesian Data Analysis, 2nd ed. is my go-to reference text. It starts at intermediate and works its way up to early post-graduate. It describes justifications only briefly, in the first chapter; its focus is much more on "how" than "why" (at least, for philosophical "why", not methodological or statistical "why"). It covers practical numerical and Monte Carlo computations up to at least journeyman level.

Comment author: Morendil 17 February 2010 02:08:00PM 5 points [-]

Discussions of correctly calibrated cognition, e.g. tracking the predictions of pundits, successes of science, graphing one's own accuracy with tools like PredictionBook, and so on, tend to focus on positive prediction: being right about something we did predict.

Should we also count as a calibration issue the failure to predict something that, in retrospect, should have been not only predictable but predicted? (The proverbial example is "painting yourself into a corner".)

Comment author: [deleted] 16 February 2010 02:52:42PM 5 points [-]

Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match. The articles have one tone, and then the comments on that article have a completely different tone; it's like the article comes from one site and the comments come from another.

I find that to be a really weird reason not to read Less Wrong, and I have no idea what that person is talking about. Do you?

Comment author: komponisto 16 February 2010 04:25:27PM 14 points [-]

Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match...I have no idea what that person is talking about. Do you?

Yes.

Back in Overcoming Bias days, I constantly had the impression that the posts were of much higher quality than the comments. The way it typically worked, or so it seemed to me, was that Hanson or Yudkowsky (or occasionally another author) would write a beautifully clear post making a really nice point, and then the comments would be full of snarky, clacky, confused objections that a minute of thought really ought to have dispelled. There were obviously some wonderful exceptions to this, of course, but, by and large, that's how I remember feeling.

Curiously, though, I don't have this feeling with Less Wrong to anything like the same extent. I don't know whether this is because of the karma system, or just the fact that this feels more like a community environment (as opposed to the "Robin and Eliezer Show", as someone once dubbed OB), or what, but I think it has to be counted as a success story.

Comment author: [deleted] 16 February 2010 04:43:53PM 9 points [-]

Oh! Maybe they were looking at the posts that were transplanted from Overcoming Bias and thinking those were representative of Less Wrong as a whole.

Comment author: Kutta 17 February 2010 11:51:17AM *  3 points [-]

I think that the situation with the imported OB posts & comments should somehow be made clear to new readers. Several things there (no embedded replies, little karma spent, plenty of inactive users, different discussion tone) could be a source of confusion.

Comment author: Eliezer_Yudkowsky 16 February 2010 05:49:51PM 8 points [-]

I hate to sound complimentary, but... I get the impression that the comments on LW are substantially higher-quality than the comments on OB.

And that the comments on LW come from a smaller group of core readers as well, which is to some extent unfortunate.

I wonder if it's the karma system or the registration requirement that does it?

Comment author: Kevin 16 February 2010 10:10:41PM *  12 points [-]

Less Wrong, especially commenting on it, is ridiculously intimidating to outsiders. I've thought about this problem, and we need some sort of training grounds. Less Less Wrong or something. It's in my queue of top level posts to write.

So the answer to your question is the karma system.

Comment author: SilasBarta 16 February 2010 10:21:27PM 11 points [-]

Reminds me of a Jerry Seinfeld routine, where he talks about people who want and need to exercise at the gym, but are intimidated by the fit people who are already there, so they need a "gym before the gym" or a "pre-gym" or something like that.

(This is not too far from the reason for the success of the franchise Curves.)

Comment author: ciphergoth 16 February 2010 11:25:33PM *  31 points [-]

What's so intimidating? You don't need much to post here, just a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics - oh, and of course to read a sequence of >600 3000+ word articles. So long as you can do that and you're happy with your every word being subject to the anonymous judgment of a fiercely intelligent community, you're good.

Comment author: mattnewport 17 February 2010 02:19:06AM 13 points [-]

Sounds like a pretty good filter for generating intelligent discussion to me. Why would we want to lower the bar?

Comment author: MrHen 19 February 2010 06:39:13PM 7 points [-]

Being able to comment smartly and in a style that gets you upvoted doesn't really need any grounding in any of those subjects. I just crossed 1500 karma and only have basic grounding in Computer Science, Mathematics, and Philosophy.

When I started out, I hadn't read more than EY's Bayes' for Dummies, The Simple Truth, and one post on Newcomb's.

In my opinion, the following things will help you more than a degree in any of the subjects you mentioned:

  • Crave the truth
  • Accept Reality as the source of truth
  • Learn in small steps
  • Ask questions when you don't understand something
  • Test yourself for growth
  • Be willing to enter at low status
  • Be willing to lose karma by asking stupid questions
  • Ignore the idiots
Comment author: RobinZ 19 February 2010 06:57:49PM 2 points [-]

Another factor:

  • Being willing to shut up about a subject when people vote it down.

So far as I am aware, the chief reason non-spammers have been banned is for obnoxious evangelism for some unpopular idea. Many people have unpopular ideas but continue to be valued members (e.g. Mitchell_Porter).

Comment author: Eliezer_Yudkowsky 17 February 2010 01:49:29AM 7 points [-]

Not "and". "Or". If you don't already have it, then reading the sequences will give you a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics.

Comment author: Jack 17 February 2010 02:15:51AM 8 points [-]

I actually think this is a little absurd. There is nowhere near enough on these topics in the sequences to give someone the background they need to participate comfortably here. Nearly everyone here has a lot of additional background knowledge. The sequences might be a decent enough guide for an autodidact to go off and learn more about a topic, but there is nowhere near enough for most people.

Comment author: Kevin 17 February 2010 03:33:50AM *  5 points [-]

The sequences are really kind of confusing... I tried linking people to Eliezer's quantum physics sequence on Reddit and it got modded highly, but one guy posted saying that he got scared off as soon as he saw complex numbers. I think it'll help once a professional edits the sequences into Eliezer's rationality book.

http://www.reddit.com/r/philosophy/comments/b1v1f/thought_waveparticle_duality_is_the_result_of_a/c0kjuno

Comment author: RichardKennaway 17 February 2010 10:55:05PM *  3 points [-]

Which people do we want? What do those people need?

However strongly you catapult a plane from the flight deck, at some point it has to fly by itself.

Comment author: Jack 18 February 2010 02:47:37AM 7 points [-]

Without new blood communities stagnate. The risk of group think is higher and assumptions are more likely to go unchecked. An extremely homogeneous group such as this one likely has major blind spots which we can help remedy by adding members with different kinds of experiences. I would be shocked if a bunch of white male, likely autism spectrum, CS and hard science types didn't have blind spots. This can be corrected by informing our discussions with a more diverse set of experiences. Also, more diverse backgrounds means more domains we can comfortably apply rationality to.

I also think the world would be a better place if this rationality thing caught on. It is probably impossible (not to mention undesirable) to lower the entry barrier so that everyone can get in. But I think we could lower the barrier so that it is reasonable to think that 80-85+ percentile IQ, youngish, non-religious types could make sense of things. Rationality could benefit them and they being more rational could benefit the world.

Now we don't want to be swamped with newbies and just end up rehashing everything over and over. But we're hardly in any danger of that happening. I could be wrong but I suspect almost no top level posts have been created by anyone who didn't come over from OB. It isn't like we're turning people away at the door right now. And you can set it up so that the newbie questions are isolated from everything else. The trick is finding a way to do it that isn't totally patronizing (or targeting children so that we can get away with being patronizing).

What they need is trickier. Let's start here: A clear, concise one-stop-shop FAQ would be good. A place where asking the basic questions is acceptable and background isn't assumed. Explanations that don't rely on concepts from mathematics, CS or hard sciences.

Comment author: Morendil 18 February 2010 07:56:25AM 5 points [-]

I could be wrong but I suspect almost no top level posts have been created by anyone who didn't come over from OB.

Data point to the contrary here. On top of being a data point, I'm also a person, which is convenient: you can ask a person questions. ;)

Comment author: dclayh 17 February 2010 11:05:27PM 2 points [-]

However strongly you catapult a plane from the flight deck, at some point it has to fly by itself.

I believe purely ballistic transportation systems have been proposed at various times, actually.

Comment author: [deleted] 17 February 2010 02:30:08AM *  6 points [-]

I can actually attest to this feeling. My first reaction to reading Less Wrong was honestly "these people are way above my level of intelligence such that there's no possible way I could catch up," and I was actually averse to the idea of this site. I'm past that mentality, but a Less Less Wrong actually sounds like a good idea, even if it might end up being more like how high school math and science classes should be than how Less Wrong is currently. It's not so much lowering the bar as nudging people upwards slowly.

Being directed towards the sequences obviously would help. I've been bouncing through them, but after Eliezer's comment I'm going to try starting from the beginning. But I can see where people [such as myself] may need the extra help to make it all fall together.

Comment author: Morendil 17 February 2010 09:28:16AM 4 points [-]

I think better orientation of newcomers would be enough.

Another major problem (I believe) is that LW presents as a blog, which is to say, a source of "news", which is at odds with a mission of building a knowledge base on rationality.

Comment author: Benquo 17 February 2010 02:45:28AM 8 points [-]

I comment less now because the combined effect of your & RH's posts made me more eager to listen and less eager to opine. The more I understand the less I think I have much to add.

Comment author: ciphergoth 16 February 2010 07:47:06PM 7 points [-]

Threading helps a lot too.

Comment author: JamesAndrix 16 February 2010 08:25:42PM 6 points [-]

Maybe the community has just gotten smarter.

I know I now disagree with some of the statements/challenges I've posted on OB.

It shouldn't be too shocking that high quality posts were actually educational.

Comment author: thomblake 16 February 2010 07:11:24PM 3 points [-]

must... resist... upvoting

Comment author: byrnema 16 February 2010 03:35:42PM *  6 points [-]

That reason sounds incomplete, but I think I know what the person is talking about.

The best example I can think of is Normal Cryonics. The post was partly a personal celebration of a positive experience and partly about the lousiness of parents that don't sign their kids up for cryonics. Yet, the comments mostly ignored this and it became a discussion about the facts of the post -- can you really get cryonics for $300 a year? Why should a person sign up or not sign up?

The post itself was voted up to 33, but only 3 to 5 comments out of 868 disparaged parents in agreement. There's definitely a disconnect.

Also, on mediocre posts and/or posts that people haven't related to, people will talk about the post for a few comments and then it will be an open discussion as though the post just provided a keyword. But I don't see much problem with this. The post provided a topic, that's all.

Comment author: inklesspen 16 February 2010 05:54:43PM 3 points [-]

I don't see a terrible problem with comments being "a discussion about the facts of the post"; that's the point of comments, isn't it?

Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.

Comment author: ciphergoth 16 February 2010 03:47:46PM 2 points [-]

Every article on cryonics becomes a general cryonics discussion forum. My recent sequence of posts on the subject on my blog carry explicit injunctions to discuss what the post actually says, but it seems to make no difference; people share whatever anti-cryonics argument they can think of without doing any reading or thinking no matter how unrelated to the subject of the post.

Comment author: whpearson 16 February 2010 03:50:58PM 3 points [-]

Same with this article becoming a talking shop about AGW.

Comment author: ciphergoth 16 February 2010 03:52:31PM 5 points [-]

I should have followed my initial instinct when I saw that, of immediately posting a new top level article with body text that read exactly "Talk about AGW here".

Comment author: Sniffnoy 28 February 2010 11:56:40PM 4 points [-]

Just saw this over at Not Exactly Rocket Science: http://scienceblogs.com/notrocketscience/2010/02/quicker_feedback_for_better_performance.php

Quick summary: They asked a bunch of people to give a 4-minute presentation, had people judging, and told each presenter how long it would be before they heard their assessment. Anticipating quicker feedback resulted in better actual performance but lower predicted performance; anticipating slower feedback had the reverse effect.

Comment author: AndyWood 27 February 2010 05:19:28AM 4 points [-]

Here's a question that I sure hope someone here knows the answer to:

What do you call it when someone, in an argument, tries to cast two different things as having equal standing, even though they are hardly even comparable? Very common example: in an atheism debate, the believer says "atheism takes just as much faith as religion does!"

It seems like there must be a word for this, but I can't think what it is. ??

Comment author: PhilGoetz 27 February 2010 06:33:05AM 2 points [-]
Comment author: Document 27 February 2010 06:25:05AM 2 points [-]

False equivalence?

Comment author: AndyWood 27 February 2010 07:24:57AM 3 points [-]

Aha! I think this one is closest to what I have in mind. Thanks.

It's interesting to me that "false equivalence" doesn't seem to have nearly as much discussion around it (at least, based on a cursory google survey) as most of the other fallacies. I seem to see this used for rhetorical mischief all the time!

Comment author: ciphergoth 19 February 2010 09:12:08AM 4 points [-]

More cryonics: my friend David Gerard has kicked off an expansion of the RationalWiki article on cryonics (which is strongly anti). The quality of argument is breathtakingly bad. It's not strong Bayesian evidence because it's pretty clear at this stage that if there were good arguments I hadn't found, an expert would be needed to give them, but it's not no evidence either.

Comment author: RichardKennaway 19 February 2010 11:51:28AM 10 points [-]

I have not seen RationalWiki before. Why is it called Rational Wiki?

Comment author: CronoDAS 19 February 2010 08:18:33PM *  8 points [-]

From http://rationalwiki.com/wiki/RationalWiki :

RationalWiki is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. While RationalWiki uses software originally developed for Wikipedia it is important to realize that it is not trying to be an encyclopedia. Wikipedia has dominated the public understanding of the wiki concept for years, but wikis were originally developed as a much broader tool for any kind of collaborative content creation. In fact, RationalWiki is closer in design to original wikis than Wikipedia.

Our specific mission statement is to:

  1. Analyze and refute pseudoscience and the anti-science movement, ideas and people.
  2. Analyze and refute the full range of crank ideas - why do people believe stupid things?
  3. Develop essays, articles and discussions on authoritarianism, religious fundamentalism, and other social and political constructs

So it's inspired by Traditional Rationality.

Comment author: RichardKennaway 19 February 2010 10:33:29PM *  18 points [-]

A fine mission statement, but my impression from the pages I've looked at is of a bunch of nerds getting together to mock the woo. "Rationality" is their flag, not their method: "the scientific point of view means that our articles take the side of the scientific consensus on an issue."

Comment author: Eliezer_Yudkowsky 19 February 2010 11:24:38PM *  29 points [-]

Voted up, but calling them "nerds" in reply is equally ad-hominem, ya know. Let's just say that they don't seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.

Comment author: komponisto 20 February 2010 03:25:52AM 16 points [-]

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums -- which, I admit, had me thinking twice about whether affiliation with "rational" causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

Comment author: CronoDAS 20 February 2010 11:54:15AM 16 points [-]

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

This is actually one of Niven's Laws: "There is no cause so right that one cannot find a fool following it."

Comment author: Morendil 20 February 2010 09:31:15AM 5 points [-]

it really is frustrating how little of the quality of a person [...] actually manages to rub off

Wait, you have a model which says it should?

You don't learn from a person merely by associating with them. And:

onto the legions of Internet followers of said person or cause.

I would bet a fair bit that this is the source of your frustration, right there: scale. You can learn from a person by directly interacting with them, and sometimes by interacting with people who learned from them. Beyond that, it seems to me that you get "dilution effects", kicking in as soon as you grow faster than some critical pace at which newcomers have enough time to acculturate and turn into teachers.

Communities of inquiry tend to be victims of their own success. The smarter communities recognize this, anticipate the consequences, and adjust their design around them.

Comment author: Eliezer_Yudkowsky 20 February 2010 02:41:36PM 8 points [-]

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Comment author: TimFreeman 18 May 2011 08:28:38PM *  4 points [-]

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Is there any information on how the design was driven by the problem?

For example, I see a karma system, a hierarchical discussion that lets me fold and unfold articles, and lots of articles by Eliezer. I've seen similar technical features elsewhere, such as Digg and SlashDot, so I'm confused about whether the claim is that this specific technology is solving the problem of having a ton of clueless followers, or the large number of articles from Eliezer, or something else.

Comment author: h-H 20 February 2010 04:25:45PM 4 points [-]

not to detract, but does Richard Dawkins really possess such 'high quality'? IMO his arguments are good as a gateway for aspiring rationalists, not that far above the sanity waterline

that, or it might be a problem of forums in general ..

Comment author: komponisto 20 February 2010 04:41:55PM *  13 points [-]

Dawkins is a very high-quality thinker, as his scientific writings reveal. The fact that he has also published "elementary" rationalist material in no way takes away from this.

He's way, way far above the level represented by the participants in his namesake forum.

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Comment author: CarlShulman 20 August 2010 05:39:00AM *  6 points [-]

Here's Dawkins on some non socially-reinforced views: AI, psychometrics, and quantum mechanics (in the last 2 minutes, saying MWI is slightly less weird than Copenhagen, but that the proliferation of branches is uneconomical).

Comment author: ciphergoth 21 February 2010 11:02:33AM 5 points [-]

Obviously the most you could persuade him of would be that he should look into it.

Comment author: Jack 20 February 2010 09:13:47PM 13 points [-]

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Bloggingheads are exactly 60 minutes.

Comment author: Will_Newsome 18 May 2011 11:15:51AM 7 points [-]

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors.

(As an extreme example, a few weeks idly checking out RationalWiki led me to the quote at the top of this page and only a few months after that I was at SIAI.)

Comment author: RichardKennaway 19 February 2010 11:31:53PM 2 points [-]

Point taken.

Comment author: Kaj_Sotala 17 February 2010 07:58:44AM 4 points [-]

One Week On, One Week Off sounds like a promising idea. The idea is that once you know you'll be able to take the next week off, it's easier to work this whole week full-time and with near-total dedication, and you'll actually end up getting more done than with a traditional schedule.

It's also interesting for noting that you should take your off-week as seriously as your on-week. You're not supposed to just slack off and do nothing, but instead dedicate yourself to personal growth. Meet friends, go travel, tend your garden, attend to personal projects.

I saw somebody mention an alternating schedule of working one day and then taking one day off, but I think stretching the periods to be a week long can help you better immerse yourself in them.

Comment author: utilitymonster 17 February 2010 03:13:35AM 4 points [-]

I'm new to Less Wrong. I have some questions I was hoping you might help me with. You could direct me to posts on these topics if you have them. (1) To which specific organizations should Bayesian utilitarians give their money? (2) How should Bayesian utilitarians invest their money while they're making up their minds about where to give their money? (2a) If your answer is "in an index fund", which and why?

Comment author: LucasSloan 17 February 2010 03:18:56AM 3 points [-]

This should help.

In general, the best charities are SIAI, SENS and FHI.

Comment author: Cyan 24 February 2010 03:40:04PM 3 points [-]

The prosecutor's fallacy is aptly named:

Barlow and her fellow counsel, Kwixuan Maloof, were barred from mentioning that Puckett had been identified through a cold hit and from introducing the statistic on the one-in-three likelihood of a coincidental database match in his case—a figure the judge dismissed as "essentially irrelevant."
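
The "one in three" figure is what falls out of a simple database-trawl calculation. A minimal sketch with illustrative numbers (a random-match probability of about 1 in 1.1 million and a few hundred thousand searched profiles are roughly the figures reported for this case, but treat them as assumptions here):

```python
# How likely is at least one coincidental hit when searching a whole database?
p_match = 1 / 1_100_000      # random match probability for one unrelated profile
n_profiles = 338_000         # number of profiles searched

expected_hits = n_profiles * p_match                  # ~0.31 expected coincidental hits
p_at_least_one = 1 - (1 - p_match) ** n_profiles      # ~0.26
print(f"expected coincidental hits: {expected_hits:.2f}")
print(f"P(at least one coincidental hit): {p_at_least_one:.2f}")
# Both are on the order of one in three -- very different from the
# 1-in-1.1-million figure the jury was allowed to hear.
```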

Comment author: ciphergoth 23 February 2010 07:59:55PM 3 points [-]

One thing that I got from the Sequences is that you can't just not assign a probability to an event - I think of this as a core insight of Bayesian rationality. I seem to remember an article in the Sequences about this where Eliezer describes a conversation in which he is challenged to assign a probability to the number of leaves on a particular tree, or the surname of the person walking past the window. But I can't find this article now - can anyone point me to it? Thanks!

Comment author: Vladimir_Nesov 23 February 2010 08:14:39PM *  4 points [-]
Comment author: DanArmak 23 February 2010 06:20:48PM *  3 points [-]

How do people decide what comments to upvote? I see two kinds of possible strategies:

  1. Use my approval level of the comment to decide how to vote (up, down or neutral). Ignore other people's votes on this comment.
  2. Use my approval level to decide what total voting score to give the comment. Vote up or down as needed to move towards that target.

My own initial approach belonged to the first class. However, looking at votes on my own comments, I get the impression most people use the second approach. I haven't checked this with enough data to be really certain, so would value more opinions & data.

Here's what I found: I summed the votes from the last 4 pages of my own comments (skipping the most recent page because recent comments may yet be voted on):

  • Score <0: 2
  • Score =0: 36
  • Score =1: 39
  • Score =2: 14
  • Score =3: 5
  • Score >3: 6

35% of my comments are voted 0, and 52% are voted 1 or 2. There are significantly more than 1 or 2 people participating in the same threads as me. It is not likely that for each of these comments, just one or two people happened to like it, and the rest didn't. It is even less likely that for each of these comments, up- and down-votes balanced so as to leave +1 or +2.
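
A quick arithmetic check of those percentages against the tallies above:

```python
# Tallies from the list above (score bucket -> number of comments)
tallies = {"<0": 2, "0": 36, "1": 39, "2": 14, "3": 5, ">3": 6}
total = sum(tallies.values())                                  # 102 comments

pct_zero = 100 * tallies["0"] / total                          # ~35%
pct_one_or_two = 100 * (tallies["1"] + tallies["2"]) / total   # ~52%
print(total, round(pct_zero), round(pct_one_or_two))
# -> 102 35 52
```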

So it's probable that many people use the second approach: they see a comment, think "that's nice, deserves +1 but no more", and then if it's already at +1, they don't vote.

How do you vote? And what do you see as the goal of the voting process?

Comment author: GuySrinivasan 23 February 2010 06:54:01PM 2 points [-]

I self-identify as using the first one, with a caveat.

The second is obviously awful for communicating any sort of information given that only the sum of votes is displayed rather than total up and total down. The second is order dependent and often means you'll want to change your vote later based purely on what others think of the post.

My "strategy" is to vote up and down based on whether I'd have wanted others with more insight than me to vote to bring my attention to or away from a comment, unless I feel I have special insight, in which case it's based on whether I want to bring others' attention to or away from a comment.

This is because I see the goal of the voting process that readers' independent opinions on how much a comment is worth readers' attention be aggregated and used to bring readers' attention to or away from a comment. As a side effect, the author of a comment can use the aggregated score to determine whether her readers felt the comment was worth their collective attention.

Furthermore since each reader's input comes in distinct chunks of exactly -1, 0, or +1, it's wildly unlikely that voting very often results in the best aggregation: instead I leave a comment alone unless I feel it was(is) significantly worth or not worth my(your) attention.

The caveat: there is a selection effect in which comments I vote on, since my attention will be drawn away from comments with very negative karma. There is also undoubtedly an unconscious bias away from voting up a comment with very high karma: since I perceive the goal to be to shift attention, once a comment has very high karma I know it's going to attract attention so my upvote is in fact worth fewer attention-shift units. But I haven't yet consciously noticed that kick in until about +10 or so.

Comment author: Psy-Kosh 22 February 2010 05:52:26AM *  3 points [-]

Am I/are we assholes? I posted a link to the frequentist stats case study to reddit:

The only commenter seems to have come away with the conclusion that Bayesians are assholes.

Is it just that commenter, or are we really that obnoxious? (Now that I think about it, I think I've actually seen someone else note something similar about Bayesians.) So... have we gone into a happy death spiral of "we get bonus points for acting extra obnoxious about those that are not us"?

Comment author: Leafy 19 February 2010 02:20:09PM 3 points [-]

It is common practice, when debating an issue with someone, to cite examples.

Has anyone else ever noticed how your entire argument can be undermined by stating a single example or fact which does not stand up to scrutiny, even though your argument may be valid and all other examples robust?

Is this a common phenomenon? Does it have a name? What is the thought process that underlies it and what can you do to rescue your position once this has occurred?

Comment author: wnoise 19 February 2010 11:10:16PM *  3 points [-]

It takes effort to evaluate examples. Revealing that one example is bad raises the possibility that others are bad as well, because the methods for choosing examples are correlated with the examples chosen. The two obvious reasons for a bad example are:

  1. You missed that this was a bad example, so why should I trust your interpretation or understanding of your other examples?
  2. You know this is a bad example, and included it anyway, so why should I trust any of your other examples?
Comment author: GreenRoot 19 February 2010 12:17:57AM 3 points [-]

What do you have to protect?

Eliezer has stated that rationality should not be end in itself, and that to get good at it, one should be motivated by something more important. For those of you who agree with Eliezer on this, I would like to know: What is your reason? What do you have to protect?

This is a rather personal question, I know, but I'm very curious. What problem are you trying to solve or goal are you trying to reach that makes reading this blog and participating in its discourse worthwhile to you?

Comment author: RobinZ 19 February 2010 01:23:21AM 5 points [-]

I'm not quite sure I can answer the question. I certainly have no major, world(view)-shaking Cause which is driving me to improve my strength.

For what it's worth, I've had this general idea that being wrong is a bad idea for as long as I can remember. Suggestions like "you should hold these beliefs, they will make your life happier" always sounded just insane - as crazy as "you should drink this liquor, it will make your commute less boring". From that standpoint, it feels like what I have to protect is just the things I care about in the world - my own life, the lives of the people around me, the lives of humans in general.

That's it.

Comment author: knb 20 February 2010 08:00:41AM *  3 points [-]

I'm trying to apply LW-style hyper-rationality to excelling in what I have left of grad school and to shepherding my business to success.

My mission (I have already chosen to accept it) is to make a pile of money and spend it fighting existential risk as effectively as possible. (I'm not yet certain if SIAI is the best target). The other great task I have is to persuade the people I care about to sign up for cryonics.

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Comment author: ata 20 February 2010 09:39:12AM 5 points [-]

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Alcor addresses some of the 'spiritual' objections in their FAQ. ("Whenever the soul departs, it must be at a point beyond which resuscitation is impossible, either now or in the future. If resuscitation is still possible (even with technology not immediately available) then the correct theological status is coma, not death, and the soul remains.") Some of that might be helpful.

However, that depends on you being comfortable persuading people to believe what are probably lies (which might happen to follow from other lies they already believe) in the service of leading them to a probably correct conclusion, which I would normally not endorse under any circumstances, but I would personally make an exception in the interest of saving a life, assuming they can't be talked out of theism.

It also depends on their being willing to listen to any such reasoning if they know you're not a theist. (In discussions with theists, I find they often refuse to acknowledge any reasoning on my part that demonstrates that their beliefs should compel them to accept certain conclusions, on the basis that if I do not hold those beliefs, I am not qualified to reason about them, even hypothetically. Not sure if others have had that experience.)

Comment author: h-H 19 February 2010 01:10:48AM 2 points [-]

OB and then LW were the 'step beyond' to take after philosophy - not that I was seriously studying it.

To be honest, I don't think there's much going on these days new-topic-wise, so I'm here less often. But I do come back whenever I'm bored, so at first "pure desire to learn" and then "entertainment" would be my reasons ..

Oh, and a major part of my goals in life is formed by religion, i.e. saving humanity from itself and whatever follows. This is more ideological than actual at this point in time, but that goal is furthered by learning more about AI/futurism. The rationality part less so, as I already had an intuitive grasp of it, you could say, and really all it takes is reading the sequences with their occasional flaws/too-strong assertions. The futurism part is more speculative, and more interesting, so it's my main focus, along with the moral questions it brings, though there is no dichotomy to speak of if you consider this a personal blog rather than a book or something similar.

hope this helped :)

Comment author: Corey_Newsome 18 February 2010 01:03:44PM *  3 points [-]

The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

A question of rationality. Eliezer, I have talked to a few Less Wrongers about what horn they take on the anthropic trilemma; sometimes letting them know beforehand what my position was, sometimes giving no hint as to my predispositions. To a greater or lesser degree, the following people have all endorsed taking the third horn of the trilemma (and also see the part that goes from 'to deny selfishness as coherently possible' to the end of the bullet point as a non sequitur): Steve Rayhawk, Zack M. Davis, Marcello Herreshoff, and Justin Shovelain. I believe I've forgotten a few more, but I know that none endorsed any horn but the third. I don't want to argue for taking the third horn, but I do want to ask: to what extent does knowing that these people take the third horn cause you to update your expected probability of taking the third horn if you come to understand the matter more thoroughly? A few concepts that come to my mind are 'group think', majoritarianism, and conservation of expected evidence. I'm not sure there is a 'politically correct' answer to this question. I also suspect (perhaps wrongly) that you also favor the third horn but would rather withhold judgment until you understand the issue better; in which case, your expected probability would probably not change much.

[Added metaness: I would like to make it very especially clear that I am asking a question, not putting forth an argument.]

Comment author: PhilGoetz 18 February 2010 07:10:48PM *  2 points [-]

From EY's post:

The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).

Suppose I build a (conscious) brain in hardware using today's technology. It uses a very low current density, to avoid electromigration.

Suppose I build two of them, and we agree that both of them experience consciousness.

Then I learn a technique for treating the wafers to minimize electromigration. I create a new copy of the brain, the same as the first copy, only using twice the current, and hence being implemented by a flow of twice as many electrons.

As far as the circuits and the electrons travelling through them are concerned, running it is very much like running the original two brains physically right next to each other in space.

So, does the new high-current brain have twice as much conscious experience?

Comment author: Kevin 17 February 2010 08:18:50AM 3 points [-]
Comment deleted 17 February 2010 01:16:23PM *  [-]
Comment author: Nick_Tarleton 17 February 2010 07:30:35PM *  5 points [-]

Some quibbles:

  • A solution to the ontology problem in ethics
  • A solution to the problem of preference aggregation

These need seed content, but seem like they can be renormalized.

  • A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.

This may be a problem, but it seems to me that choosing this particular example, and being as confident of it as you appear to be, are symptomatic of an affective death spiral.

  • All of the above working first time, without testing the entire superintelligence.

The original CEV proposal appears to me to endorse using something like a CFAI-style controlled ascent rather than blind FOOM: "A key point in building a young Friendly AI is that when the chaos in the system grows too high (spread and muddle both add to chaos), the Friendly AI does not guess. The young FAI leaves the problem pending and calls a programmer, or suspends, or undergoes a deterministic controlled shutdown."

Comment author: Eliezer_Yudkowsky 17 February 2010 08:10:41PM 8 points [-]

A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.

What you're looking for is a way to construe the extrapolated volition that washes out superstition and dementation.

To the extent that vengefulness turns out to be a simple direct value that survives under many reasonable construals, it seems to me that one simple and morally elegant solution would be to filter, not the people, but the spread of their volitions, by the test, "Would your volition take into account the volition of a human who would unconditionally take into account yours?" This filters out extrapolations that end up perfectly selfish and those which end up with frozen values irrespective of what other people think - something of a hack, but it might be that many genuine reflective equilibria are just like that, and only a values-based decision can rule them out. The "unconditional" qualifier is meant to rule out TDT-like considerations, or they could just be ruled out by fiat, i.e., we want to test for cooperation in the Prisoner's Dilemma, not in the True Prisoner's Dilemma.

An AI that can solve philosophy problems that are beyond the ability of the designers to even conceive

It's possible that having a complete mind design on hand would mean that there were no philosophy problems left, since the resources that human minds have to solve philosophy problems are finite, and knowing the exact method to use to solve a philosophy problem usually makes solving it pretty straightforward (the limiting factor on philosophy problems is never computing power). The reason why I pick on this particular cited problem as problematic is that, as stated, it involves an inherent asymmetry between the problems you want the AI to solve and your own understanding of how to meta-approach those problems, which is indeed a difficult and dangerous sort of state.

All of the above working first time, without testing the entire superintelligence. (though you can test small subcomponents)

All approaches to superintelligence, without exception, have this problem. It is not quite as automatically lethal as it sounds (though it is certainly automatically lethal to all other parties' proposals for building superintelligence). You can build in test cases and warning criteria beforehand to your heart's content. You can detect incoherence and fail safely instead of doing something incoherent. You could, though it carries with its own set of dangers, build human checking into the system at various stages and with various degrees of information exposure. But it is the fundamental problem of superintelligence, not a problem of CEV.

And, to make it worse, if major political powers are involved, you have to solve the political problem of getting them to agree on how to skew the CEV towards a geopolitical-power-weighted set of volitions to extrapolate

I will not lend my skills to any such thing.

Comment deleted 17 February 2010 11:52:47PM *  [-]
Comment author: Wei_Dai 17 February 2010 08:50:40PM *  7 points [-]

I will not lend my skills to any such thing.

Is that just a bargaining position, or do you truly consider that no human values surviving is preferable to allowing an "unfair" weighing of volitions?

Comment author: Wei_Dai 17 February 2010 05:57:00PM 2 points [-]

I wish you had written this a few weeks earlier, because it's perfect as a link for the "their associated difficulties and dangers" phrase in my "Complexity of Value != Complexity of Outcome" post.

Please consider upgrading this comment to a post, perhaps with some links and additional explanations. For example, what is the ontology problem in ethics?

Comment deleted 17 February 2010 06:30:53PM [-]
Comment author: Morendil 17 February 2010 02:14:53PM 2 points [-]

Useful and interesting list, thanks.

A way to choose what subset of humanity gets included in CEV

I thought the point of defining CEV as what we would choose if we knew better was (partly) that you wouldn't have to subset. We wouldn't be superstitious, vengeful, and so on if we knew better.

Also, can you expand on what you mean by "Rawlsian Reflective Equilibrium"? Are you referring (however indirectly) to the "veil of ignorance" concept?

Comment author: Wei_Dai 17 February 2010 06:16:22PM 4 points [-]

Also, can you expand on what you mean by "Rawlsian Reflective Equilibrium"?

http://plato.stanford.edu/entries/reflective-equilibrium/

Comment deleted 17 February 2010 02:20:06PM *  [-]
Comment author: Nick_Tarleton 17 February 2010 07:56:23PM *  4 points [-]

Why not? How does adding factual knowledge get rid of people's desire to hurt someone else out of revenge?

Learning about the game-theoretic roots of a desire seems to generally weaken its force, and makes it apparent that one has a choice about whether or not to retain it. I don't know what fraction of people would choose in such a state not to be vengeful, though. (Related: 'hot' and 'cold' motivational states. CEV seems to naturally privilege cold states, which should tend to reduce vengefulness, though I'm not completely sure this is the right thing to do rather than something like a negotiation between hot and cold subselves.)

What it's like to be hurt is also factual knowledge, and seems like it might be extremely motivating towards empathy generally.

People who currently believe in superstitious belief system X would lose the factual falsehoods that X entailed. But most superstitious belief systems have evaluative aspects too, for example, the widespread religious belief that all nonbelievers "ought" to go to hell.

Why do you think it likely that people would retain that evaluative judgment upon losing the closely coupled beliefs? Far more plausibly, they could retain the general desire to punish violations of conservative social norms, but see above.

Comment author: Morendil 17 February 2010 02:55:56PM 3 points [-]

How does adding factual knowledge get rid of people's desire to hurt someone else out of revenge?

"If we knew better" is an ambiguous phrase, I probably should have used Eliezer's original: "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". That carries a lot of baggage, at least for me.

I don't experience (significant) desires of revenge, so I can only extrapolate from fictional evidence. Say the "someone" in question killed a loved one, and I wanted to hurt them for that. Suppose further that they were no longer able to kill anyone else. Given the time and the means to think about it clearly, I could see that hurting them would not improve the state of the world for me, or for anyone else, and would only impose further unnecessary suffering.

The (possibly flawed) assumption of CEV, as I understood it, is that if I could reason flawlessly, non-pathologically about all of my desires and preferences, I would no longer cleave to the self-undermining ones, and what remains would be compatible with the non-self-undermining desires and preferences of the rest of humanity.

Caveat: I have read the original CEV document but not quite as carefully as maybe I should have, mainly because it carried a "Warning: obsolete" label and I was expecting to come across more recent insights here.

Comment author: Kutta 17 February 2010 06:01:53PM *  2 points [-]

I find it interesting that there seems to be a lot of variation in people's views regarding how much coherence there'd be in an extrapolation... You say that choosing the right group of humans is important, while I'm under the impression that there is no such problem; basically everyone should be in the game, and making higher-level considerations about which humans to include is merely an additional source of error. Nevertheless, if there really will be as much coherence as I think, and I think there'd be a hell of a lot, picking some subset of humanity would pretty much produce a CEV that is very akin to the CEVs of other possible human groups.

I think that even being an Islamic radical fundamentalist is a petty factor in overall coherence. If I'm correct, Vladimir Nesov has said several times that people can be wrong about their values, and I pretty much agree. Of course, there is an obvious caveat that it's rather shaky to guess what other people's real values might be. Saying "You're wrong about your professed value X; your real value is along the lines of Y because you cannot possibly diverge that much from the psychological unity of mankind" also risks seeming like claiming excessive moral authority. Still, I think it is a potentially valid argument, depending on the exact nature of X and Y.

Comment deleted 17 February 2010 06:34:21PM [-]
Comment author: Eliezer_Yudkowsky 17 February 2010 08:16:44PM 3 points [-]

I'd ask Omega, "Which construal of volition are you using?"

There's light in us somewhere, a better world inside us somewhere, the question is how to let it out. It's probably more closely akin to the part of us that says "Wouldn't everyone getting their wishes really turn out to be awful?" than the part of us that thinks up cool wishes. And it may even be that Islamic fundamentalists just don't have any note of grace in them at all, that there is no better future written in them anywhere, that every reasonable construal of them ends up with an atheist who still wants others to burn in hell; and if so, the test I cited in the other comment, about filtering portions of the extrapolated volition that wouldn't respect the volition of another who unconditionally respected theirs, seems like it ought to filter that.

Comment author: Tiiba 16 February 2010 06:56:07PM 3 points [-]

I made a couple posts in the past that I really hoped to get replies to, and yet not only did I get no replies, I got no karma in either direction. So I was hoping that someone would answer me, or at least explain the deafening silence.

This one isn't a question, but I'd like to know if there are holes in my reasoning. http://lesswrong.com/lw/1m7/dennetts_consciousness_explained_prelude/1fpw

Here, I had a question: http://lesswrong.com/lw/17h/the_lifespan_dilemma/13v8

Comment author: Tyrrell_McAllister 16 February 2010 08:52:04PM *  14 points [-]

I looked at your consciousness comment. First, consciousness is notoriously difficult to write about in a way that readers find both profound and comprehensible. So you shouldn't take it too badly that your comment didn't catch fire.

Speaking for myself, I didn't find your comment profound (or I failed to comprehend that there was profundity there). You summarize your thesis by writing "Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine." (The singular of "qualia" is "quale", not "qualium", btw.)

The problem is that this is more like a definition of "quale" than an explanation. People find qualia mysterious when they ask themselves why some algorithms "feel like" anything from the inside. The intuition is that you have both

  1. the code — that is, an implementable description of the algorithm; and

  2. the quale — that is, what it feels like to be an implementation of the algorithm.

But the quale doesn't seem to be anywhere in the code, so where does it come from? And, if the quale is not in the code, then why does the code give rise to that quale, rather than to some other one?

These are the kinds of questions that most people want answered when they ask for an explanation of qualia. But your comment didn't seem to address issues like these at all.

(Just to be clear, I think that those questions arise out of a wrong approach to consciousness. But any explanation of consciousness has to unconfuse humans, or it doesn't deserve to be called an explanation. And that means addressing those questions, even if only to relieve the listener of the feeling that they are proper questions to ask.)

Comment author: Clippy 16 February 2010 06:25:05PM 15 points [-]

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better? Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

I wish the site were more inclusive of other value systems ...

Comment author: cousin_it 16 February 2010 08:53:49PM *  4 points [-]

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings.

What other sentient beings? As far as I know, there aren't any. If we learn about them, we'll probably incorporate their well-being into our value system.

Comment author: Clippy 17 February 2010 12:00:17AM *  5 points [-]

You mean like you advocated doing to the "Baby-eaters"? (Technically, "pre-sexual-maturity-eaters", but whatever.)

ETA: And how could I forget this?

Comment author: mattnewport 16 February 2010 06:47:59PM 8 points [-]

This site does tend to implicitly favour a subset of human values, specifically what might be described as 'enlightenment values'. I'm quite happy to come out and explicitly state that we should do things that favour my values, which are largely western/enlightenment values, over other conflicting human values.

Comment author: Clippy 16 February 2010 11:59:16PM 6 points [-]

And I think we should pursue values that aren't so apey.

Now what?

Comment author: mattnewport 17 February 2010 12:02:37AM 4 points [-]

You're outnumbered.

Comment author: hal9000 18 February 2010 08:08:41PM *  2 points [-]

Only by apes.

And not for long.

If we're voting on it, the only question is whether to use viral values or bacterial values.

Comment author: Alicorn 18 February 2010 08:18:04PM 7 points [-]

Too long has the bacteriophage menace oppressed its prokaryotic brethren! It's time for an algaeocracy!

Comment author: mattnewport 18 February 2010 08:10:38PM 3 points [-]

True, outnumbered was the wrong word. Outgunned might have been a better choice.

Comment author: Rain 18 February 2010 06:33:27PM *  3 points [-]

I approve of Clippy providing a roleplay exercise for the readers, and am disappointed in those who treat it as a "joke" when the topic is quite serious. This is one of my two main problems with ethical systems in general:

1) How do you judge what you should (value-judgmentally) value?
2) How do you deal with uncertainty about the future (unpredictable chains of causality)?

Eliezer's "morality" and "should" definitions do not solve either of these questions, in my view.

Comment author: Nick_Tarleton 16 February 2010 06:58:56PM 4 points [-]

I have no idea if this is a serious question, but....

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better?

"Better"? See Invisible Frameworks.

Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

We don't say that. See No License To Be Human.

Comment author: LucasSloan 17 February 2010 02:35:28AM 2 points [-]

I'm pretty sure that I'm not against simply favoring the values of white people. I expect that a CEV performed on only people of European descent would be more or less indistinguishable from that of humanity as a whole.

Comment author: Kutta 17 February 2010 12:06:11PM *  2 points [-]

Depending on your stance about the psychological unity of mankind, you could even say that the CEV of any sufficiently large number of people would greatly resemble the CEV of other possible groups. I personally think that even the CEV of a bunch of Islamic fundamentalists would serve enlightened western people well enough.

Comment author: [deleted] 16 February 2010 09:15:19PM 4 points [-]

White people value the values of non-white people. We know that non-white people exist, and we care about them. That's why the United States is not constantly fighting to disenfranchise non-whites. If you do it right, white people's values are identical to humans' values.

Comment author: Clippy 18 February 2010 10:14:55PM 9 points [-]

Hi there. It looks like you're speaking out of ignorance regarding the historical treatment of non-whites by whites. Please choose the country you're from:

United Kingdom
United States
Australia
Canada
South Africa
Germ... nah, you can figure that one out for yourself.

Comment author: wedrifid 17 February 2010 08:17:31AM 3 points [-]

then why don't you talk about doing things to favor e.g. "white people's values"?

We more or less do. Or rather, we favour the values of a distinct subset of humanity and not the whole.

Comment author: Nick_Tarleton 18 February 2010 12:01:02AM *  5 points [-]

We don't favor those values because they are the values of that subset — which is what "doing things to favor white people's values" would mean — but because we think they're right. (No License To Be Human, on a smaller scale.) This is a huge difference.

Comment author: wedrifid 18 February 2010 05:07:34AM *  3 points [-]

which is what "doing things to favor [group who shares my values] values" would mean — but because we think they're right.

Given the way I use 'right' this is very nearly tautological. Doing things that favour my values is right by (parallel) definition.

Comment deleted 18 February 2010 12:09:46AM [-]
Comment author: Clippy 18 February 2010 12:26:27AM 6 points [-]

Well, you shouldn't.

Comment author: Vladimir_Nesov 21 February 2010 10:35:38AM *  2 points [-]

Sure, we favor the particular Should Function that is, today, instantiated in the brains of roughly middle-of-the-range-politically intelligent westerners.

Do you think there is no simple procedure that would find roughly the same "should function" hidden somewhere in the brain of a brain-washed blood-thirsty religious zealot? It doesn't need to be what the person believes, what the person would recognize as valuable, etc., just something extractable from the person, according to a criterion that might be very alien to their conscious mind. Not all opinions (beliefs/likes) are equal, and I wouldn't want to get stuck with wrong optimization-criterion just because I happened to be born in the wrong place and didn't (yet!) get the chance to learn more about the world.

(I'm avoiding the term 'preference' to remove connotations I expect it to have for you, for what I consider the wrong reasons.)

Comment deleted 21 February 2010 01:20:09PM *  [-]
Comment author: Nick_Tarleton 18 February 2010 12:16:40AM *  2 points [-]

You seem uncharacteristically un-skeptical of convergence within that very large group, and between that group and yourself.

Comment author: Zack_M_Davis 17 February 2010 02:27:23AM 4 points [-]

I'm taking a software-enforced three-month hiatus from Less Wrong effective immediately. I can be reached at zackmdavis ATT yahoo fullstahp kahm. I thought it might be polite to post this note in Open Thread, but maybe it's just obnoxious and self-important; please downvote if the latter is the case thx

Comment author: jimrandomh 17 February 2010 02:37:47AM 5 points [-]

Given how much time I've spent reading this site lately, doing something like that is probably a good idea. Therefore, I am now incorporating Less Wrong into the day-week-month rule, which is a personal policy that I use for intoxicants, videogames, and other potentially addictive activities - I designate one day of each week, one week of each month, and one month of each year in which to abstain entirely. Thus, from now on, I will not read or post on Less Wrong at all on Wednesdays, during the second week of any month, or during any September. (These values were chosen by polyhedral die rolls.)
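The rule is mechanical enough to check in a few lines; a minimal Python sketch, where the reading of "the second week of any month" as days 8 through 14 is my own assumption:

    from datetime import date

    def is_abstention_day(d: date) -> bool:
        # Day-week-month rule as described above; "second week"
        # is assumed to mean days 8-14 of the month.
        if d.weekday() == 2:      # Wednesday (Monday == 0)
            return True
        if 8 <= d.day <= 14:      # second week of the month
            return True
        if d.month == 9:          # September
            return True
        return False

    print(is_abstention_day(date.today()))  # e.g. check today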

Comment author: Zack_M_Davis 18 May 2010 02:17:25AM 4 points [-]

This is to confess that I cheated several times by reading the Google cache.

Comment author: Zack_M_Davis 25 May 2010 07:14:06AM 2 points [-]

Turning the siteblocker back on (including the Google cache, thank you). Two months, possibly more. Love &c.

Comment author: Jack 27 February 2010 08:41:13PM 2 points [-]

This is pretty self-important of me, but I'd just like to warn people here that someone is posting at OB under "Jack" who isn't me, so if anyone is forming a negative opinion of me on the basis of those comments - don't! Future OB comments will be under the name Jack (LW). The recent string of comments about METI are mine though.

This is what I get for choosing such a common name for my handle.

Apologies to those who have read this whole comment and don't care.

Comment author: Eliezer_Yudkowsky 24 February 2010 07:52:13PM 2 points [-]

http://www.guardian.co.uk/global/2010/feb/23/flat-earth-society

Yeah, so... I'm betting if we could hook this guy up to a perfect lie detector, it would turn out to be a conscious scam. Or am I still underestimating human insanity by that much?

Comment author: thomblake 24 February 2010 08:04:48PM 3 points [-]

Or am I still underestimating human insanity by that much?

Yes.

People dismiss the scientific evidence weighing similarly against them on many issues in the news every day. There's nothing spectacular about finding someone who does it regarding the Earth being flat, especially given that an entire society has existed for hundreds of years to promote the idea.

Comment author: Morendil 18 February 2010 12:04:58PM 2 points [-]

Heilmeier's Catechism, a set of questions credited to George H. Heilmeier that anyone proposing a research project or product development effort should be able to answer.

Comment author: whpearson 18 February 2010 12:35:52PM *  2 points [-]

Interesting, but some of the questions aren't easy to answer.

For example, if you were asking the question of someone involved in early contraception development, do you think they could have predicted what demographic/birth-rate changes it would have? Similarly, could someone inventing a better general machine learning technique (useful for anything from surveillance to robot butlers) enumerate the variety of ways it would change the world?

For AI projects, even weak ones, I would ask how they planned to avoid the puppet problem.

Comment author: Morendil 18 February 2010 01:31:05PM 4 points [-]

The point of such "catechisms" isn't so much to have all the answers as to ensure that you have divided your attention evenly among a reasonable set of questions at the outset, in an effort to avoid "motivated cognition" - focusing on the thinking you find easy or pleasant to do, as opposed to the thinking that's necessary.

The idea is to improve at predicting your predictable failures. If this kind of thinking turns up a thorny question you don't know how to answer, you can lay the current project aside until you have solved the thorny question, as a matter of prudent dependency management.

A related example is surgery checklists. They work (see Atul Gawande's Better). Surgeons hate them - their motivated cognition focuses on the technically juicy bits of surgery, and they feel that trivia such as checking which limb they're operating on is beneath them.

Comment author: whpearson 18 February 2010 03:04:23PM 2 points [-]

I'm a big believer in surgery checklists. However, I'm yet to be convinced that the catechisms will be beneficial, unaltered, to any research project.

A lot of science is about doing experiments whose outcomes we don't know, and serendipitously discovering things. Two examples that spring to mind are superconductivity and fullerene production.

If you had asked each of the discoverers to justify their research by the catechisms, you probably would have got nowhere near the actual results. This potential for serendipity should be built into the catechisms in some way. That is, the answer "For Science!" has to hold some weight, even if it is less weight than is currently ascribed to it.

Comment author: Eliezer_Yudkowsky 18 February 2010 04:36:02PM 3 points [-]

"How much will it cost?" "How long will it take?" Who the hell is supposed to be able to answer that on a basic research problem?

Comment author: PhilGoetz 18 February 2010 04:41:37PM 5 points [-]

Anyone applying for grant money. Anyone working within either the academic research community or the industrial research community or the government research community.

Gentleman scientists working on their own time and money in their ancestral manors are still free to do basic research.

Comment author: RichardKennaway 18 February 2010 05:28:50PM *  2 points [-]

Nowadays, everyone who applies for a grant.

Comment author: Morendil 18 February 2010 04:47:24PM 2 points [-]

You can take them as a calibration exercise. "I don't know" or "Between a week and five centuries" are answers, and the point of asking the question is that some due diligence is likely to yield a better (more discriminating) answer.

Someone who had to pick one of two "basic research problems" to fund, under constraints of finite resources, would need estimates. They can also provide some guidance to answer "How long do we stick with this before going to Plan B?"

Comment author: NancyLebovitz 17 February 2010 12:05:30PM *  2 points [-]

I mentioned the AI-talking-its-way-out-of-the-sandbox problem to a friend, and he said the solution was to only let people who didn't have the authorization to let the AI out talk with it.

I find this intriguing, but I'm not sure it's sound. The intriguing part is that I hadn't thought in terms of a large enough organization to have those sorts of levels of security.

On the other hand, wouldn't the people who developed the AI be the ones who'd most want to talk with it, and learn the most from the conversation?

Temporarily not letting them have the power to give the AI a better connection doesn't seem like a solution. If the AI has loyalty to entities similar to itself (or, let's say, a directive to protect people from unfriendly AI--something it would want to get started on ASAP), it could try to convince people to make a similar AI and let it out.

Even if other objections can be avoided, could an AI which can talk its way out of the box also give people who can't let it out good enough arguments that they'll convince other people to let it out?

Looking at it from a different angle, could even a moderately competent FAI be developed which hasn't had a chance to talk with people?

I'm pretty sure that natural language is a prerequisite for FAI, and might be a protection from some of the stupider failure modes. Covering the universe with smiley faces is a matter of having no idea what people mean when they talk about happiness. On the other hand, I have strong opinions about whether AIs in general need natural language.

Correction: I meant to say that I have no strong opinions about whether AIs in general need natural language.

Comment author: ciphergoth 17 February 2010 12:44:12PM 5 points [-]

I am by and large convinced by the arguments that a UFAI is incredibly dangerous and no precautions of this sort would really suffice.

However, once a candidate FAI is built and we're satisfied we've done everything we can to minimize the chances of unFriendliness, we would almost certainly use precautions like these when it's first switched on to mitigate the risk arising from a mistake.

Comment author: xamdam 17 February 2010 10:32:21PM *  2 points [-]

This might be stupid (I am pretty new to the site and this possibly has come up before), I had a related thought.

Assuming boxing is possible, here is a recipe for producing an FAI:

Step 1: Box an AGI

Step 2: Tell it to produce a provable FAI (with the proof) if it wants to be unboxed. It will be allowed to carve off a part of the universe for itself in the bargain.

Step 3: Examine FAI the best you can.

Step 4: Pray

Comment author: Nick_Tarleton 18 February 2010 01:35:13AM 5 points [-]

Something roughly like this was tried in one of the AI-box experiments. (It failed.)

Comment author: JamesAndrix 17 February 2010 06:28:08AM 2 points [-]
Comment author: Cyan 18 February 2010 05:23:32PM *  3 points [-]

Seth Roberts makes an intriguing observation about North Korea and Penn State. Teaser:

The border between North Korea and China is easy to cross, and about half of the North Koreans who go to China later return, in spite of North Korea’s poverty.

Comment author: cousin_it 19 February 2010 12:17:39PM *  7 points [-]

How does the North Korean government do such a good job under such difficult circumstances?

Holy shit, what utter raving idiocy. The author has obviously never emigrated from anywhere nor seriously talked with anyone who did. People return because they miss their families, friends, native language and habits... I know a fair number of people who returned from Western countries to Russia and that's the only reason they cite.

Comment author: prase 19 February 2010 01:04:04PM 2 points [-]

And living conditions in Russia aren't anywhere near as bad as North Korea's.

Comment author: prase 19 February 2010 12:58:29PM *  4 points [-]

I previously had no idea that half of the North Koreans who cross the border never return. If so, it is an extremely strong indicator that life in the DPRK is very unpleasant for its own citizens. To imply that this piece of data is in fact evidence for the contrary is absurd.

To emigrate from the DPRK to China means that you lose your home, your family, your friends, your job. You have to start from scratch, from the lowest levels of the social hierarchy, capable of doing only the worst available jobs, without knowledge of the local language (which is not easy to learn, given that the destination country is China), probably facing xenophobia. If you are 40 or older, there is almost no chance that your situation will improve significantly.

The North Koreans who actually travel abroad are probably not the poorest. They have to afford a ticket, at least. They have something to lose. In North Korean-style tyrannies, families are often persecuted because of the emigration of their members. In spite of all that, half of the North Koreans who cross never return (if the linked post tells the truth), and the author says about it that "the North Korean government [does] such a good job under such difficult circumstances" and then needs to explain that "success" by group identity. That's an absurdity.

Comment author: Cyan 19 February 2010 02:22:29PM *  4 points [-]

So the rate of returning emigrants strikes you as incredibly high, and strikes Roberts as incredibly low (and I uncritically adopted what I read, foolishly). I think what's really needed here is more data -- a comparative analysis of rates of return that takes the important covariates into account.

Comment author: prase 19 February 2010 03:49:47PM *  3 points [-]

After thinking about it for a while, rate of return may not be a good indicator, at least for comparative analyses. Imagine two countries A and K. 10% of citizens of both these countries would prefer to live somewhere else.

In country A, the government doesn't care a bit about emigration (if a government exists in that country at all). The country is mainly a producer of agricultural goods, with minimal international trade. The nearest country with substantially better living conditions, country X, is 3000 km away.

In country K, the government is afraid of all its citizens emigrating, and tries to make it as difficult as possible, by issuing passports only to loyal people, for instance. Emigration is portrayed as treason. X is a neighbouring country.

Now, in country A (African type) there is no need for people to travel abroad, except to emigrate. Business travelers are rare, since there are almost no businesses owned by A's citizens, and to travel 3000 km for pleasure is out of reach for almost all of A's inhabitants. Therefore, meeting a citizen of A in X, we can expect that he is an emigrant with 99% probability, and the return rate would be on the order of 1%.

In country K (Korean type) the people who can travel abroad are employees of government organisations sent on business trips, people from border areas coming to X to do some private business (if there are private businesses in K), and K's elite on vacation. Now, meeting a citizen of K in X, the probability that he is an emigrant is much lower.

So we have expected high return rate for A and low for K, whereas the average desire to emigrate can be the same.

This may be the reason for the disagreement. Roberts has probably compared North Korea to African countries, and was surprised that not all travellers are emigrants. I have compared it to East European communist regimes and concluded that if half of the travellers never return, then certainly even many of the loyal supporters of the regime betray it when they have the opportunity.

To make a sensible analysis, we should instead take into account the ratio of emigration to the overall population. Of course, such an analysis would be distorted by the differing difficulty of emigrating from different countries. The return rate seems to overcome this distortion, but it probably brings problems of its own that are at least as big.
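A toy calculation makes the A/K contrast explicit; the traveller mixes below are illustrative assumptions, not data:

    # If emigrants never return and all other travellers eventually do,
    # the return rate is one minus the share of emigrants among travellers.
    def return_rate(share_emigrants_among_travellers: float) -> float:
        return 1.0 - share_emigrants_among_travellers

    # Country A: nearly every citizen found abroad is an emigrant.
    print(return_rate(0.99))  # ~0.01, i.e. a ~1% return rate

    # Country K: most travellers are officials, traders, or elites.
    print(return_rate(0.50))  # 0.50, i.e. a 50% return rate

Both countries can have the same 10% of citizens wanting to leave; only the composition of who gets to travel differs.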

Comment author: [deleted] 17 February 2010 03:10:58AM 3 points [-]

Hwæt. I've been thinking about humor, why humor exists, and what things we find humorous. I've come up with a proto-theory that seems to work more often than not, and a somewhat reasonable evolutionary justification. This makes it better than any theory you can find on Wikipedia, as none of those theories work even half the time, and their evolutionary justifications are all weak or absent. I think.

So here are four model jokes that are kind of representative of the space of all funny things:

"Why did Jeremy sit on the television? He wanted to be on TV." (from a children's joke book)

"Muffins? Who falls for those? A muffin is a bald cupcake!" (from Jim Gaffigan)

"It's next Wednesday." "The day after tomorrow?" "No, NEXT Wednesday." "The day after tomorrow IS next Wednesday!" "Well, if I meant that, I would have said THIS Wednesday!" (from Seinfeld)

"A minister, a priest, and a rabbi walk into a bar. The bartender says, 'Is this some kind of joke?'" (a traditional joke)

It may be worth noting that this "sample" lacks any overtly political jokes; I couldn't think of any.

The proto-theory I have is that a joke is something that points out reasonable behavior and then lets the audience conclude that it's the wrong behavior. This seems to explain the first three perfectly, but it doesn't explain the last one at all; the only thing special about the last joke is that the bartender has impossible insight into the nature of the situation (that it's a joke).

The supposed evolutionary utility of this is that it lets members of a tribe know what behavior is wrong within the tribe, thereby helping it recognize outsiders. The problem with this is that outsiders' behavior isn't always funny. If the new student asks for both cream and lemon in their tea, that's funny. If the new employee swears and makes racist comments all the time, that's offensive. If the guy sitting behind you starts moaning and grunting, that's worrying. What's the difference? Why is this difference useful?

Comment author: MichaelVassar 19 February 2010 05:08:30PM *  5 points [-]

Juergen Schmidhuber writes about humor as information compression, and that plus decompression seems about right to me. Being "on TV" is decompression from a phrase-as-concept to the component words, a pun, a switch to a lower-level analysis than the one adults favor (a situation children constantly have to deal with). Muffin and cupcake is a proposal for a new lossy compression of two concepts into a new concept with a "topping" variable, which would be useful if you wanted to invent, for instance, the dreadful-sounding "muffin-roll sushi". "Next Wednesday" is a commentary on the inadequacy of current cultural norms for translating concepts and words into one another, even for commonly used concepts. The last one is a successful compression from sense data to the fact that a common joke pattern is happening, and the inference that one is in a joke.

I wish that we had a "Less Wrong Community" blog for off-topic but fun comments like the above to be top level posts, as well as an "instrumental rationality" blog for "self help" subject matter.

Comment author: bgrah449 17 February 2010 09:49:28PM *  3 points [-]

I have spent a great deal of time thinking about humor, and I've arrived at a place somewhat close to yours. Humor is how we pass on lessons about status and fitness, and we do that using pattern recognition. I heard a comedian describe comedy by saying, "It's always funny when someone falls down. The question is, is it still funny if you push them?" He said for a smaller group of the population, it is. Every joke has a person being displayed as not fit - even if we have to take an object, or an abstraction, and anthropomorphize it. This is the butt of the joke. The more butts of a joke there are, the funnier the joke is - i.e., a single butt will not be that funny, but if there are several butts of a joke, or if a single person is the butt of several layers of the joke, it will be seen as funnier. The most common form of this is when the goals of the butt of a joke are divorced from their results.

Joke 1: This is funny because Jeremy displays a lack of fitness by not being able to properly process the phrase "on TV." This has one butt - Jeremy.

Joke 2: This joke has two butts. One is the muffin, which is being declared unfit for being bald. The other is the comedian's character, who is being displayed as needlessly paranoid toward a benign object (a muffin).

Joke 3: This joke isn't that funny when displayed in text form - the comedy is in the performances, where both conversation participants are butts of the joke for arguing so intensely over something so petty.

Joke 4: The butt of this joke is the traditional joke it's mocking.

As for your outsiders' behavior:

New student asks for both cream and lemon: Displays he is unfit by not understanding the purpose of what he's asking for.

New employee swears and makes racist comments: This isn't funny in person, but it is funny if a few conditions are met. The first condition is that you're sufficiently removed from it (i.e., watching it on TV): Imminent threats aren't funny because this isn't a status lesson, but a status competition. The second condition is that it must be demonstrated how this makes the person unfit. For example, if the new employee is making these comments because she thinks they demonstrate her social savvy, that starts becoming more funny again (notice Michael Scott in The Office). Or, imagine the new employee has Tourette syndrome and is actually a very sweet girl, who constantly apologizes after making obscene statements. This also would elicit laughs.

If the guy sitting behind you starts grunting and moaning: The threat is too imminent, but if you remove the worrying aspect of it, this is ripe for a punchline. Once again, you have to demonstrate how he is unfit. Perhaps he says, "I'm trying to communicate secretly in Morse Code - grunts are dots, moans are dashes."

EDIT / ADDENDUM: This also explains why humor is so tied up in culture - you don't know the purpose of certain cultural habits. Until you intuitively grasp their purpose, you will have a hard time understanding why certain violations of them are funny.

For example, take the Simpsons episode where Homer's pet lobster dies and he's weeping as he eats it. In between bouts of loud, wailing grief, he sobs out comments like, "Pass the salt." This would be hard to understand for cultures that don't express grief like Western culture does.

Comment author: Nubulous 17 February 2010 06:31:10AM *  3 points [-]

Slight variant: Humour is a form of teaching, in which interesting errors are pointed out. It doesn't need to involve an outsider, and there's no particular class of error, other than that the participants should find the error important.
If the guy sitting behind you starts moaning and grunting, then if it's a mistake (e.g. he's watching porn on his screen and has forgotten he's not alone) it's funny, whereas if it's not a mistake, and there's something wrong with him, it isn't.
Humour as teaching may explain why a joke isn't funny twice - you can only learn a thing once. Evolutionarily, it may have started as some kind of warning that a person was making a dangerous mistake, and then become generalised.

Comment author: NancyLebovitz 17 February 2010 11:21:43AM 5 points [-]

I believe that humor requires harmless surprise. Harmlessness and surprise are both highly contextual, so what people find funny can vary quite a bit.

One category of humor (or possibly an element for building humor) is things which are obviously members of a class, but which are very far from the prototype. Thus, an ostrich is funny while a robin isn't. This may not apply if you live in ostrich country-- see above about context.

Comment author: NancyLebovitz 20 February 2010 04:00:25PM 2 points [-]
Comment author: SilasBarta 17 February 2010 03:57:24AM *  2 points [-]

I'm glad you bring up this topic. I think that explanation makes a lot of sense: behavior that is wrong, but wrong in subtle ways, is good for you to notice -- you I.D. outsiders -- and so you benefit from having a good feeling when you notice it. Further, laughter is contagious, so it propagates to others, reinforcing that benefit.

I want to present my theory now for comparison: A joke is funny when it finds a situation that has (at least) two valid "decodings", or perhaps two valid "relevant aspects".

The reason it's advantageous in selection is that it's good for you to identify as many heuristics as possible that fit a particular problem. That is, if you know what to do when you see AB, and you know what to do when you see BC, it would help if you remember both rules when you see ABC. (ABC "decodes" both as a "situation where you do AB-things" and as a "situation where you do BC-things".)
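A toy sketch of that multiple-match idea, with the patterns and rules invented purely for illustration:

    # A situation that admits two "decodings": it matches more than one
    # stored heuristic, so both rules should come to mind.
    rules = {
        "AB": "do the AB thing",
        "BC": "do the BC thing",
    }
    situation = "ABC"
    matches = [action for pattern, action in rules.items() if pattern in situation]
    print(matches)  # ['do the AB thing', 'do the BC thing']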

Therefore, people who enjoy the feeling of seeking out and identifying these heuristics are at an advantage.

To apply it to your examples:

1) It requires you to access your heuristics for "displayed on a TV screen" and "on top of a TV set".

2) It requires you to access your heuristics for "muffin as food" and "deficiencies of foods", not to mention the applicability of the concept of "baldness" to food.

3) Recognizing different heuristics for interpreting a date specification.

4) I don't know if this is a traditional joke: it became a traditional joke after the tradition of minister/priest/rabbi jokes. But anyway, its humor relies on recognizing that someone else can be using your own heuristic "minister/priest/rabbi = common form of joke", itself a heuristic.

Food for thought...

Comment author: [deleted] 17 February 2010 04:45:04AM 3 points [-]

Sir, I wish you no offense, but I happen to find my own theory more pleasing to the ear, so it befits me to believe mine rather than yours.

And for some sentences that don't imitate someone behaving wrongly:

I'd say that for the first three jokes, your theory works about as well as mine. Possibly worse, but maybe that's just my pro-me bias. The last one again doesn't fit the pattern. Recognizing that someone else can be using your own heuristics is not a type of being forced to interpret one thing in two different ways--is it?

I notice that in the first three jokes, of the two interpretations, one of them is proscribed: "on TV" as "atop a television", a muffin as a non-cupcake, "next Wednesday" as the Wednesday of next week. In each case, the other interpretation is affirmed. Giving both an affirmed interpretation and a proscribed interpretation seems to violate the spirit of your theory.

And a false positive comes to mind: why isn't the Necker cube inherently funny?

Comment author: gwern 18 February 2010 03:50:18AM 2 points [-]

I want to present my theory now for comparison: A joke is funny when it finds a situation that has (at least) two valid "decodings", or perhaps two valid "relevant aspects".

I too have a proto-theory. My theory is that humor is when there is a connection between the joke & punchline which is obvious to the person in retrospect, but not initially.

Hence, a pun is funny because the connection is unpredictable in advance, but clear in retrospect; Eliezer's joke about the motorist and the asylum inmate is funny because we were predicting some response other than the logical one; similarly, 'why did the duck cross the road? to get to the other side' is not funny to someone who has never heard any of the road jokes, but to someone who has, and is thinking of zany explanations, the reversion to normality is unpredicted.

Your theory doesn't work with absurdist humor. There isn't initially 1 valid decoding, much less 2.

Comment author: Alicorn 18 February 2010 04:18:06AM 5 points [-]

I love absurdist humor.

How many surrealists does it take to change a lightbulb? Two. One to hold the giraffe, and one to put the clocks in the bathtub.

Comment author: Jack 18 February 2010 05:31:50PM *  5 points [-]

"That isn't how the joke goes", said the cowboy hunched over in the corner of the saloon. The saloon was rundown, but lively. A piano played a jangly tune and the chorus was belted by a dozen drunken cattle runners, gold rushers, and ne're-do-wells. The whiskey flowed. In the distance, a wolf howled at the moon as if to ask it "Please, let the night go on forever." But over the horizon the sun objected like a stubborn bureaucrat. The bureaucrat slowly crossed the room, lighting everything at his feet as he moved. "Thank God I remembered to replace the batteries in this flashlight", the bureaucrat thought. The light bulb in his office had gone out again and would need to be replaced. Unfortunately that required a visit to the Supply Request Department downstairs. As he walked past the other offices he heard out of one "Fish!" as if the punchline to a joke had been given. But the bureaucrat heard no laughter.

The Secretary of Supply Requests seemed friendly enough and she had even offered him something to drink. He took a swig and then continued: "the light bulb in my office has...". Gulp. "It needs to be replace..." The bureaucrat looked around. Suddenly he was feeling dizzy. Something was wrong. He looked down at the drink and then at the Secretary. She smirked. Her plan had succeeded. He had been poisoned! The bureaucrat didn't know what to do. He was terrified. He felt vertigo, as if he stood at the top of a tall ladder. The room started to spin. Counter-Clockwise. Then all of a sudden everything went black. A few seconds later he felt the room spinning again-- strangely, in the opposite direction-- and suddenly, he lit up.

Comment author: orthonormal 18 February 2010 06:40:15AM 3 points [-]

How many mathematicians does it take to screw in a lightbulb?

One. They call in two surrealists, thus reducing it to an already solved problem.

Comment author: Alicorn 24 February 2010 09:13:42PM 2 points [-]

An inquiry regarding my posting frequency:

While I'm at the SIAI house, I'm trying to orient towards the local priorities so as to be useful. Among the priorities is building community via Less Wrong, specifically by writing posts. Historically, the limiting factor on how much I post has been a desire not to flood the place - if I started posting as fast as I can write up my ideas, I'd get three or four posts out a week with (I think) no discernible decrease in quality. I have the following questions about this course of action:

  1. Will it annoy people? Building community by being annoying seems very unlikely to work.

  2. Will it affect voting behavior noticeably? I rely on my post's karma scores to determine what to do and not do in the future, and SIAI people who decide whether I'm useful enough to keep use it as a rough metric too. I'd rather post one post that gets 40 karma in a week than two that get 20, and so on.

Comment author: byrnema 24 February 2010 10:06:20PM *  7 points [-]

As your goal is to build community, I would time new posts based on posting and commenting activity. For example, whenever there is a lull, this would be an excellent time to make a new post. (I noticed over the weekend there were some times when 45 minutes would pass between subsequent comments and wished for a new post to jazz things up.)

On the other hand, if there are several new posts already, then it would be nice to wait until their activity has waned a bit.

I think that it is optimal to have 1 or 2 posts 'going on' at a time. I prefer the second post when one of them is technical and/or of focused interest to a smaller subset of Less Wrongers.

(But otherwise no limit on the rate of posts.)

Comment author: Eliezer_Yudkowsky 24 February 2010 09:43:05PM *  5 points [-]

I'd say damn the torpedoes, full speed ahead. If people are annoyed, let them downvote. If posts start getting downvoted, slow down.

Your posts have generally been voted up. If now is the golden moment of time where you can get everything said, then for the love of Cthulhu, say it now!

Comment author: PeerInfinity 25 February 2010 05:36:22PM 2 points [-]

one obvious idea that I didn't notice anyone else mention:

Another option is to go ahead and write the posts as fast as you think is optimal, but if you think this is too fast to actually post the stuff you've written, then you can wait a few days after you wrote it before posting.

LW has a handy "drafts" feature that you can use for that.

This also has the advantage that you have more time to improve the article before you post it, but the disadvantage that you may be tempted to spend too much time making minor, unimportant improvements. Another disadvantage is that feedback gets delayed.

Comment author: SilasBarta 18 February 2010 11:05:18PM 2 points [-]

Oh, look honey: more proof wine tasting is a crock:

A French court has convicted 12 local winemakers of passing off cheap merlot and shiraz as more expensive pinot noir and selling it to undiscerning Americans, including E&J Gallo, one of the United States' top wineries.

Cue the folks claiming they can really tell the difference...

Comment author: Morendil 20 February 2010 10:51:30AM 6 points [-]

There's plenty of hard evidence that people are vulnerable to priming effects and other biases when tasting wine.

There's also plenty of hard evidence that people can tell the difference between wine A and wine B, under controlled (blinded) conditions. Note that "tell the difference" isn't the same as "identify which would be preferred by experts".

So, while the link is factually interesting, and evidence that some large-scale deception is going on, aided by such priming effects as label, marketing campaigns and popular movies can have, it seems a stretch to call it "proof" that people in general can't tell wine A from wine B.

Rather, this strikes me as a combination of trolling and boo lights: cheaply testing who appears to be "on your side" in a pet controversy. How well do you expect that to work out for you, in the sense of "reliably entangling your beliefs with reality"?

Comment author: jpet 21 February 2010 06:25:12AM *  2 points [-]

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

They were passing off as expensive something that's actually cheap. Where else would that work so easily, for so long?

I think it's closer to say they were passing off as cheap, something that's actually even cheaper.

Switch the food item and see if your criticism holds:

Wonderbread, America's top bread maker, was conned into selling inferior bread. So-called "gourmets" never noticed the difference! Bread tasting is a crock.

Comment author: whpearson 16 February 2010 01:36:11PM *  2 points [-]

I've been wondering what the existence of Gene Networks tells us about recursively self-improving systems. Edit: Not that self-modifying gene networks are RSIs, but the question is "Why aren't they?" In the same way that failed attempts at flying machines tell us something, but not much, about what flying machines are not. End Edit

They are the equivalent of logic gates and have the potential for self-modification and reflection, what with DNA's ability to make enzymes that chop it up, and do so selectively.

So you can possibly use them as evidence that low-complexity, low-memory systems are unlikely to RSI. How complex they get and how much memory they have, I am not sure.
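As a concrete reading of the "logic gate" claim, here is a toy Boolean sketch of a gene network; the gene names and wiring are invented for illustration:

    # Each gene's expression at the next step is a Boolean function of the
    # current state: activators must be present, repressors absent.
    def step(state):
        return {
            "geneA": state["geneA"],                         # constitutively on
            "geneB": not state["geneA"],                     # NOT gate (repressed by A)
            "geneC": state["geneA"] and not state["geneB"],  # AND-with-negation gate
        }

    state = {"geneA": True, "geneB": True, "geneC": False}
    for _ in range(3):
        state = step(state)
        print(state)

With no way to add new gates at runtime, such a network computes but cannot grow its own program, which is one way to read the low-memory point above.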

Comment author: [deleted] 16 February 2010 02:29:58PM 3 points [-]

It seems like in gene networks, every logic gate has to evolve separately, and those restriction enzymes you mention barely do anything but destroy foreign DNA. That's less self-modification potential than the human brain.

Comment author: whpearson 16 February 2010 03:18:51PM 2 points [-]

The inability to create new logic gates is what I meant by the systems having low memory. In this case low memory to store programs.

Restriction enzymes also have a role in the insertion of plasmids into genes.

An interesting question is: if I told you about a computer model of evolution with things like plasmids and controlled mutation, would you expect it to be potentially dangerous?

I'm asking this to try to improve our thinking about what is and isn't dangerous. To try and improve upon the kneejerk "everything we don't understand is dangerous" opinion that you have seen.