Open Thread: May 2010

3 Post author: Jack 01 May 2010 05:29AM

You know what to do.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (543)

Comment author: Jack 01 May 2010 08:47:23AM *  24 points [-]

He who controls the karma controls the world.

Less Wrong dystopian speculative fiction: An excerpt.

JulXutil sat, legs crossed in the lotus position, at the center of the Less Wrong hedonist-utilitarian subreddit. Above him, in a foot-long green oval, was his karma total: 230450036. The subreddit was a giant room with a wooden floor and rice paper walls. In the middle the floor was raised, and then raised again to form a shallow step pyramid with bamboo staircases linking the levels. The subreddit was well lit. Soft light emanated from the rice paper walls as if they were lit from behind and Japanese lanterns hung from the ceiling.

Foot soldiers, users JerelYu and Maxine, stood at the top of each staircase to deal with the newbies who wanted to bother the world-famous JulXutil and to spot and downvote trolls before they did much damage. They also kept their eyes out for members of rival factions because while /lw/hedutil was officially public, every Less Wrong user knew this subreddit was Wireheader territory and had been since shortly after Lewis had published his famous Impossibility Proof for Friendliness. The stitched image of an envelope on JulXutil’s right sleeve turned red. He tapped it twice and the dojo disappeared and was replaced by his inbox. He tapped on the new message and its sender appeared before him.

Henry_Neil: Jul, I just heard from my source at Alcor. The procedure was successful. He’s been admitted. It'll go public in the morning.

JulXutil: Exciting, terrifying news. What will happen to his account?

Henry_Neil: It won't go anywhere. But users who haven’t logged in for thirty days don’t get counted when the server computes controlling karma. That leaves his 40% up for grabs.

JulXutil: How much support we end up with will depend on how organized the opposition is. We need full admin powers and enough backing to amend the constitution. Henry, I need you to take care of a few high karma players. They'd interfere with our plans. I’ll tell you whom. It’ll have to be timed just right. Contact me again when you've selected your men.

Henry_Neil: If the Blindsighters have heard the news they'll try the same thing. Your karmic reputation is in danger. Take precautions, stay out of the main subreddits, especially EvPsych. You’ll hear from me soon.

To be continued...

Comment author: Kutta 01 May 2010 09:52:56AM *  5 points [-]

This is golden. I demand continuation.

Comment author: Thomas 01 May 2010 03:42:03PM 2 points [-]

It's a real question where the karma system leads. In the long run, we might see quite unexpected and unwanted results. But there is probably no way to find out other than to wait and see where it actually goes. I guess a kind of conformism will prevail, if it hasn't already.

Comment author: cousin_it 02 May 2010 01:48:39PM 0 points [-]

The karma=wireheading angle is wonderful, and I think new.

Comment author: vinayak 01 May 2010 11:23:11AM 1 point [-]

I want to understand Bayesian reasoning in detail, in the sense that I want to take up a statement that is relevant to our daily life and then try to find exactly how much I should believe in it based on the beliefs that I already have. I think this might be a good exercise for the LW community. If yes, then let's take up a statement, for example, "The whole world is going to be nuked before 2020." Now, based on whatever you know right now, you should form some percentage of belief in this statement. Can someone please show me exactly how to do that?

Comment author: Jack 01 May 2010 06:30:42PM *  4 points [-]

Well, to begin with we need a prior. You can choose one of two wagers. In the first, 1,000,000 blue marbles and one red marble are put in a bag. You get to remove one marble; if it is the red one, you win a million dollars. Blue, you get nothing. In the second wager, you win a million dollars if a nuclear weapon is detonated under non-testing and non-accidental conditions before 2020. Otherwise, nothing. In both cases you don't get the money until January 1st, 2021. Which wager do you prefer?

If you prefer the nuke bet, repeat with 100,000 blue marbles; if you prefer the marbles, try 100,000,000. Repeat until you find wagers that are approximately equal in their estimated value to you.

Edit: Commenters other than vinayak should do this too so that he has someone to exchange information with. I think I stop at maybe 200:1 against nuking.
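Jack's elicitation procedure can be made explicit: at the indifference point, your implied probability is just the chance of drawing the red marble. A minimal sketch (the helper name is mine; the 200:1 stopping point is from the comment above):

```python
def implied_probability(blue_marbles):
    """One red marble among `blue_marbles` blue ones: the probability
    of drawing red, which equals your implied P(event) at indifference."""
    return 1 / (blue_marbles + 1)

# A stopping point of roughly 200:1 against corresponds to a bag
# with 200 blue marbles and 1 red one.
p_nuke = implied_probability(200)
assert abs(p_nuke - 1 / 201) < 1e-12
```

Halving or doubling the number of marbles until you feel indifferent between the two wagers brackets your degree of belief without ever asking you to name a number directly.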

Comment author: vinayak 01 May 2010 11:39:26PM 0 points [-]

So 200:1 is your prior? Then where's the rest of the calculation? Also, how exactly did you come up with the prior? How did you decide that 200:1 is the right place to stop? Or in other words, can you claim that if a completely rational agent had the same information that you have right now, then that agent would also come up with a prior of 200:1? What you have described is just a way of measuring how much you believe in something. But what I am asking is how do you decide how strong your belief should be.

Comment author: Jack 01 May 2010 11:57:57PM *  1 point [-]

It's just the numerical expression of how likely I feel a nuclear attack is. (ETA: I didn't just pick it out of thin air. I can give reasons but they aren't mathematically exact. But we could work up to that by considering information about geopolitics, proliferation etc.)

Or in other words, can you claim that if a completely rational agent had the same information that you have right now, then that agent would also come up with a prior of 200:1?

No, I absolutely can't claim that.

What you have described is just a way of measuring how much you believe in something. But what I am asking is how do you decide how strong your belief should be.

By making a lot of predictions and hopefully getting good at it while paying attention to known biases and discussing the proposition with others to catch your errors and gather new information. If you were hoping there was a perfect method for relating information about extremely complex propositions to their probabilities... I don't have that. If anyone here does please share. I have missed this!

But theoretically, if we're even a little bit rational, the more updating we do the closer we should get to the right answer (though I'm not actually sure we're even this rational). So we pick priors and go from there.

Comment author: Daniel_Burfoot 01 May 2010 11:56:38PM -2 points [-]

Can someone please show me exactly how to do that?

The problem with your question is that the event you described has never happened. Normally you would take a dataset and count the number of times an event occurs vs. the number of times it does not occur, and that gives you the probability.

So to get estimates here you need to be creative with the definition of events. You could count the number of times a global war started in a decade. Going back to say 1800 and counting the two world wars and the Napoleonic wars, that would give about 3/21. If you wanted to make yourself feel safe, you could count the number of nukes used compared to the number that have been built. You could count the number of people killed due to particular historical events, and fit a power law to the distribution.

But nothing is going to give you the exact answer. Probability is exact, but statistics (the inverse problem of probability) decidedly isn't.
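One hedged way to turn the "3 global wars in 21 decades" count into an estimate is Laplace's rule of succession; the smoothing choice is mine, not the comment's:

```python
def rule_of_succession(successes, trials):
    """Laplace's rule of succession: estimate (s + 1) / (n + 2),
    which avoids assigning probability 0 or 1 to unseen outcomes."""
    return (successes + 1) / (trials + 2)

# Roughly 3 global wars in the 21 decades since 1800.
per_decade = rule_of_succession(3, 21)  # about 0.17 per decade
```

The raw frequency 3/21 and the smoothed 4/23 are close here because there is a fair amount of data; the smoothing matters most when the event has never been observed at all, which is exactly the situation the comment describes.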

Comment author: vinayak 02 May 2010 05:20:38AM 3 points [-]

Consulting a dataset and counting the number of times the event occurred and so on would be a rather frequentist way of doing things. If you are a Bayesian, you are supposed to have a probability estimate for any arbitrary hypothesis that's presented to you. You cannot say, "Oh, I do not have the dataset with me right now, can I get back to you later?"

What I was expecting as a reply to my question was something along the following lines. One would first come up with a prior for the hypothesis that the world will be nuked before 2020. Then, one would identify some facts that could be used as evidence in favour or against the hypothesis. And then one would do the necessary Bayesian updates.

I know how to do this for the simple cases of balls in a bin etc. But I get confused when it comes to forming beliefs about statements that are about the real world.

Comment author: Matt_Simpson 02 May 2010 06:01:15AM 2 points [-]

The answer is... it's complicated, so you approximate. A good way of approximating is getting a dataset together and putting together a good model that helps explain that dataset. Doing the perfect Bayesian update in the real world is usually worse than nontrivial - it's basically impossible.

Comment author: Mass_Driver 02 May 2010 06:07:34AM 3 points [-]

If you haven't already, you might want to take a look at Bayes Theorem by Eliezer.

As sort of a quick tip about where you might be getting confused: you summarize the steps involved as (1) come up with a prior, (2) identify potential evidence, and (3) update on the evidence. You're missing one step. You also need to check to see whether the potential evidence is "true," and you need to do that before you update.

If you check out Conservation of Expected Evidence, linked above, you'll see why. You can't update just because you've thought of some facts that might bear on your hypothesis and guessed at their probability -- if your intuition is good enough, your guess about the probability of the facts that bear on the hypothesis should already be factored into your very first prior. What you need to do is go out and actually gather information about those facts, and then update on that new information.

For example: I feel hot. I bet I'm running a fever. I estimate my chance of having a bacterial infection that would show up on a microscope slide at 20%.

I think: if my temperature were above 103 degrees, I would be twice as likely to have a bacterial infection, and if my temperature were below 103 degrees, I would only be half as likely to have a bacterial infection. Considering how hot I feel, I guess there's a 50-50 chance my temperature is above 103 degrees. I STILL estimate my chance of having a bacterial infection at 20%, because I already accounted for all of this. This is just a longhand way of guessing.

Now, I take my temperature with a thermometer. The readout says 104 degrees. Now I update on the evidence; now I think the odds that I have a bacterial infection are 40%.

The math is fudged very heavily, but hopefully it clarifies the concepts. If you want accurate math, you can read Eliezer's post.
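For readers who want the unfudged arithmetic, the update is easiest in odds form. Using only the numbers from the example (a 20% prior and a likelihood ratio of 2 for the high reading), the exact posterior is 1/3 rather than 40%:

```python
def bayes_update(prior_p, likelihood_ratio):
    """Posterior probability via the odds form of Bayes' theorem:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_p / (1 - prior_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 20% prior on infection; a 104-degree reading is twice as likely
# given a bacterial infection than without one.
posterior = bayes_update(0.20, 2.0)
assert abs(posterior - 1 / 3) < 1e-9
```

The structure matches the comment exactly: guessing at the thermometer reading changes nothing (the expectation is already in the prior), while actually observing the reading multiplies your odds by the likelihood ratio.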

Comment author: Morendil 02 May 2010 09:17:00AM *  3 points [-]

The interesting question isn't so much "how do I convert a degree of belief into a number", but "how do I reconcile my degrees of beliefs in various propositions so that they are more consistent and make me less vulnerable to Dutch books".

One way to do that is to formalize what you take that statement to mean, so that its relationships to "other beliefs" becomes clearer. It's what, in the example you suggest, the Doomsday clock scientists have done. So you can look at whatever data has been used by the Doomsday Clock people, and if you have reason to believe they got the data wrong (say, about international agreements), then your estimate would have to be different from theirs. Or you could figure out they forgot to include some evidence that is relevant (say, about peak uranium), or that they included evidence you disagree is relevant. In each of these cases Bayes' theorem would probably tell you at the very least in what direction you should update your degree of belief, if not the exact amount.

Or, finally, you could disagree with them about the structural relationships between bits of evidence. That case pretty much amounts to making up your own causal model of the situation. As other commenters have noted it's fantastically hard to apply Bayes rigorously to even a moderately sophisticated causal model, especially one that involves such an intricately interconnected system as human society. But you can always simplify, and end up with something you know is strictly wrong, but has enough correspondence with reality to be less wrong than a more naive model.

In practice, it's worth noting that only very seldom does science tackle a statement like this one head-on; as a reductionist approach science generally tries to explicate causal relationships in much smaller portions of the whole situation, treating each such portion as a "black box" module, and hoping that once this module's workings are formalized it can be plugged back into a more general model without threatening the overall model's validity too much.

The word "complex" is appropriate to refer precisely to situations where this approach fails, IMHO.
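Morendil's Dutch-book point can be made concrete. If your probabilities for A and ¬A sum to more than 1, a bookie who sells you a bet on each at those prices profits no matter which way A turns out. A toy sketch (the 0.6 figures are mine, chosen for illustration):

```python
def guaranteed_profit(p_a, p_not_a, stake=1.0):
    """You pay p * stake for each bet that pays `stake` if it wins.
    Exactly one of A and not-A occurs, so you collect `stake` once.
    Returns your guaranteed net (negative means a sure loss)."""
    total_cost = (p_a + p_not_a) * stake
    return stake - total_cost

# Incoherent beliefs: P(A) = 0.6 and P(not-A) = 0.6 sum to 1.2.
assert guaranteed_profit(0.6, 0.6) < 0           # sure loss of about 0.2
assert abs(guaranteed_profit(0.7, 0.3)) < 1e-9   # coherent: no sure loss
```

This is the sense in which "reconciling" your degrees of belief matters: coherence (probabilities of exhaustive, exclusive outcomes summing to 1) is exactly the condition that makes you immune to this kind of book.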

Comment author: MartinB 01 May 2010 01:02:29PM 2 points [-]

Question: How do you apply the rationalist ideas you learned on lesswrong in your own (professional and/or private) life?

Comment author: Bo102010 01 May 2010 02:52:08PM 4 points [-]

I remind myself of Conservation of Expected Evidence most days I'm at work.

I'm an engineer, and it helps remind me that a data point can either support a hypothesis or that hypothesis's opposite, but not both at once. This is especially useful for explaining things to non-technical people.

Comment author: ShardPhoenix 01 May 2010 03:40:01PM 2 points [-]

I think learning more about rationalization, akrasia, and so forth, has made it easier for me to keep regularly going to the gym, by noticing when I'm just making excuses for being lazy, etc.

Comment author: MartinB 01 May 2010 01:03:29PM *  11 points [-]

Question: Which strongly held opinion did you change in a notable way, since learning more about rationality/thinking/biases?

Comment author: gelisam 01 May 2010 01:35:45PM 4 points [-]

I started to believe in the Big Bang here. I was convinced by the evidence, but as this comment indicates, not by the strongest evidence I was given; rather, it was necessary to contradict the specific reasoning I used to disbelieve the Big Bang in the first place.

Is this typical? I think it would be very helpful if, in addition to stating which opinion you have changed, you stated whether the evidence convinced you because it was strong or because it broke the chain of thought which led to your pre-change opinion.

Comment author: Matt_Simpson 01 May 2010 08:51:36PM 6 points [-]

I'm no longer a propertarian/Lockean/natural rights libertarian. Learning about rationality essentially made me feel comfortable letting go of a position that I honestly didn't have a good argument for (and I knew it). The ev-psych stuff scared the living hell out of me (and the libertarianism* apparently).

*At least that sort of libertarianism

Comment author: MartinB 01 May 2010 09:09:54PM *  3 points [-]

To answer my own question:

  • changed political and economic views (similar to Matt).

  • changed views on the effects of Nutrition and activity on health (including the actions that follow from that)

  • changed view on the dangers of GMO (yet again)

  • I became aware of areas where I am very ignorant of opposing arguments, and try to counterbalance

  • I finally understand the criticisms about the skeptics movement

  • I repeatedly underestimated the amount of ignorance in the world, and got shocked when discovering that

And on the funnier side: last week I found out that I learned a minor physics fact wrong. It was not a strongly held opinion, just a fact I never looked up again until now. For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.
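The 9% figure follows directly from the standard densities of water and ice; the numbers below are approximate textbook values, used here only as a sanity check:

```python
# Approximate densities in g/cm^3 near 0 degrees C.
WATER_DENSITY = 0.9998
ICE_DENSITY = 0.9167

# Freezing conserves mass, so volume grows by the inverse density ratio.
expansion = WATER_DENSITY / ICE_DENSITY - 1
assert 0.08 < expansion < 0.10  # about 9%, nowhere near 10x
```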

Comment author: mattnewport 01 May 2010 09:20:49PM 2 points [-]

For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%

Have you never made ice cubes?

Comment author: Jack 01 May 2010 09:31:46PM 1 point [-]

For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%

Not to hit you over the head with this, as I've noticed before how common it is that someone learns a random fact or two much later than they should. But, you never, say, made frozen popsicles? I mean a whole lot of havoc would get wreaked... imagine frozen pipes... water in cracks in the road...

Related to this subject, my sister was 14 before someone corrected her belief that "North" on a map corresponded to the sky above her head (which if you think about it is the intuitive interpretation when maps are placed vertically on classroom walls).

Comment author: MartinB 01 May 2010 09:44:26PM 1 point [-]

Both numbers serve as an explanation for why pipes crack. I never did any visualization of it. (It's not that uncommon for people to have inconsistent beliefs.) IIRC I read that fact in a Mickey Mouse magazine at the appropriate age, but never tried it myself.

Since reading about memory bias I am deeply afraid of having false or corrupted memories, while also wanting to experience such an effect. Finding minor mistakes in my knowledge of physics is similarly disturbing. The content of the example itself doesn't really change anything about my life. But I am left wondering how many other mistakes I carry around.

Comment author: Jack 01 May 2010 09:50:30PM 0 points [-]
Comment author: MartinB 01 May 2010 09:52:38PM 6 points [-]

Additional side note: I am deeply troubled by the fact that all of the important things in my life happened by pure accident. I am generally happy with the development of the ideas I hold true and dear so far, but wouldn't have minded some shortcuts. There is no clear-cut path that has me ending up in the place I would want to be in, and I do not see anything systematic I can do about that. I don't 'choose' to become a rationalist or not; instead I get sucked in by interesting articles that carry ideas I find pleasant. But it would have been equally likely that I spent my initial weeks of reading OB/LW on TVTropes instead. I recently checked an atheist board for good recommendations on rational thought (considering that my path down to science started with the reasoned-atheism bit) and was shocked by the lack of anything that resembled even a reasonable recommendation.

I don't like accidental developments.

Comment author: mattnewport 01 May 2010 09:54:31PM 7 points [-]

Do you have any scientific/engineering training? A habit I note that people with such training tend to develop is to do a little mental arithmetic when confronted with some new numerical 'fact' and do some basic sanity checking against their existing beliefs. I often find when I am reading a news story that I notice some inconsistency in the numbers presented (something as simple as percentages for supposedly mutually exclusive things adding up to more than 100 for example) that I am amazed slipped past both the writer and the editor. The fact that most journalists lack any real scientific or engineering training is probably the reason for this. This ice 'fact' should have been immediately obviously wrong to someone applying this habit.

It's perfectly understandable if this is just one of those things you picked up as a child and never had any cause to examine but it is indicative of a common failing and I would suggest that as a rule developing this 'engineering mindset' is valuable for any aspiring rationalist regardless of whether their job involves the routine application of such habits.

Comment author: MartinB 01 May 2010 10:09:19PM 0 points [-]

I am in the final stages of becoming a computer scientist, so: 'no'.

In school I had physics as one of my advanced subjects. I don't think I saw any actual science training anywhere in my education. But that might be due to my own ignorance.

I still do not do math as often as I should, but sometimes.

What might have contributed to sustaining the mistake is my very early knowledge on the mistakes in intuitive judging of scaling volumes.

I should really milk this mistake for systematic causes....

Comment author: mattnewport 01 May 2010 10:14:33PM 2 points [-]

In school I had physics as one of the depend subjects. I don't think I saw any actual science training anywhere in my education. But that might be due to my own ignorance.

Unfortunately this is not something that is generally taught well in high school science classes even though it would be of much more practical use to most students than what they are actually being taught. It is conveyed better in university science courses that have a strong experimental component and in engineering courses.

Comment author: MartinB 01 May 2010 10:35:41PM 2 points [-]

It might not be too surprising that I totally agree.

In CS we don't do that much experimentation. And I have some beef with the lack of teaching good ways to actually make software. I don't think the words 'version control' were ever uttered anywhere.

Comment author: Vladimir_Golovin 02 May 2010 09:15:40AM 2 points [-]

As a result of reading this post, I uninstalled a 10-year-old habit -- drinking a cup of strong coffee every morning. Now I drink coffee only when I feel that I need a short-term boost.

Comment author: NancyLebovitz 02 May 2010 10:10:18AM *  3 points [-]

Coffee and concentration experiment

Article about self-measurement

A few months ago, Barooah began to wean himself from coffee. His method was precise. He made a large cup of coffee and removed 20 milliliters weekly. This went on for more than four months, until barely a sip remained in the cup. He drank it and called himself cured. Unlike his previous attempts to quit, this time there were no headaches, no extreme cravings. Still, he was tempted, and on Oct. 12 last year, while distracted at his desk, he told himself that he could probably concentrate better if he had a cup. Coffee may have been bad for his health, he thought, but perhaps it was good for his concentration.

Barooah wasn’t about to try to answer a question like this with guesswork. He had a good data set that showed how many minutes he spent each day in focused work. With this, he could do an objective analysis. Barooah made a chart with dates on the bottom and his work time along the side. Running down the middle was a big black line labeled “Stopped drinking coffee.” On the left side of the line, low spikes and narrow columns. On the right side, high spikes and thick columns. The data had delivered their verdict, and coffee lost.

This doesn't mean you don't get a boost, but it might be worth checking.
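Barooah's analysis amounts to splitting his time series at the quit date and comparing the two samples. A minimal sketch with invented numbers (the actual dataset is not public):

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical daily minutes of focused work, split at the
# "stopped drinking coffee" date.
with_coffee = [95, 110, 80, 120, 100]
without_coffee = [150, 170, 140, 160, 155]

# The article's verdict is a comparison of sample means; a careful
# version would add a significance test and check for time trends.
coffee_lost = mean(without_coffee) > mean(with_coffee)
assert coffee_lost
```

Even this crude before/after split beats guesswork, which is the article's point; the obvious confounder is that anything else changing around the quit date (sleep, workload) gets attributed to coffee.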

Comment author: Vladimir_Golovin 02 May 2010 11:55:04AM 2 points [-]

My experience is quite similar to what is described in the first article -- no coffee leads to better concentration for me. The caffeine 'boost' I was talking about reduces my concentration but makes me more inclined to action -- I found it useful for breaking through procrastination periods. The effect of Red Bull on me is similar but more pronounced.

The effect seems to be physical, but I don't rule out placebo (and frankly, it's fine with me either way.)

Comment author: [deleted] 02 May 2010 12:56:37PM 13 points [-]

Theism. Couldn't keep it. In the end, it wasn't so much that the evidence was good -- it had always been good -- as that I lost the conviction that "holding out" or "staying strong" against atheism was a virtue.

Standard liberal politics, of the sort that involved designing a utopia and giving it to people who didn't want it. I had to learn, by hearing stories, some of them terrible, that you have no choice but to respect and listen to other people, if you want to avoid hurting them in ways you really don't want to hurt them.

Comment author: gelisam 02 May 2010 03:46:21PM 0 points [-]

Could you link some of these stories, please? I am known to entertain utopian ideas from time to time, but if utopias really do hurt people, then I'd rather believe that they hurt people.

Comment author: [deleted] 02 May 2010 03:49:27PM 2 points [-]

Personal stories, from a friend, so no, there's no place to link them. Well-meaning liberals have either hurt, or failed to help, him and people close to him.

Comment author: xamdam 02 May 2010 04:23:13PM 1 point [-]

I recommend reading Blank Slate to get a good perspective on the Utopian issues; the examples (I was born in USSR) are trivial to come by, but the book will give you a mental framework to deal with the issues.

Comment author: Matt_Simpson 02 May 2010 04:48:33PM *  1 point [-]

Communism is one utopia that ended in disaster, see Rummel's Death by Government

Comment author: Liron 03 May 2010 03:21:19AM 2 points [-]

I just listened to UC Berkeley's "Physics for Future Presidents" course on iTunes U (highly recommended) and I thought, "Surely no one can take theism seriously after experiencing what it's like to have real knowledge about the universe."

Comment author: MartinB 03 May 2010 03:48:27PM 6 points [-]

Disagreed. My current opinion is that you can be a theist and combine that with pretty much any other knowledge. Eliezer points to Robert Aumann as an example. For someone who has theism hardcoded into their brain and treats it as a different kind of knowledge than physics, there can be virtually no visible difference in everyday life from a normal atheist. I think the problem is not so much the theism, but that people base decisions on it.

Comment author: [deleted] 03 May 2010 04:01:07PM 3 points [-]

Oh, it's true. I know deeply religious scientists. Some of them are great scientists. Let's not get unduly snide about this.

Comment author: PhilGoetz 01 May 2010 03:40:23PM 3 points [-]

I was going through the rationality quotes and noticed that I always glanced at the current point score before voting. I wasn't able to not do that.

It might be useful to have a setting under which the points on a comment, and maybe also on a post, would be hidden until after you voted on it.

Comment author: NancyLebovitz 01 May 2010 04:05:53PM 1 point [-]

For that matter, it might be good to have the option of not automatically seeing one's karma score-- I think I give mine more attention than it's worth, and I can't not see it if it's at the top of the page.

Comment author: Rain 01 May 2010 04:15:00PM *  4 points [-]

Marcello posted an anti-kibitzer Greasemonkey script which does that. It'd be nice to have it as core functionality of the site though, yeah.

Comment author: Morendil 01 May 2010 04:46:14PM 6 points [-]

Been working on it - it's actually committed to the LW codebase - but not released yet due to browser issues. Finding a design that avoids those is more work, not sure when I can commit to taking it on.

Comment author: RobinZ 01 May 2010 04:16:29PM 0 points [-]

There's currently a Greasemonkey script - "no kibitz", I think (I'm browsing by mobile, hard to look things up) - that does this; someone said they were working on adding it to the codebase a while ago.

Comment author: anonym 02 May 2010 05:51:52AM 1 point [-]

If you just want to not see the scores, but still see the author names, then Marcello's script isn't appropriate, because I think that hides the author as well.

I just added a LessWrong Comment Score Tweaks script that can be used to hide/display scores or toggle between visible and hidden.

After you install greasemonkey and that script, the "user script commands" menu of greasemonkey on lesswrong pages will contain:

  • Toggle Comment Scores
  • Hide Comment Scores
  • Show Comment Scores

There are also some key bindings defined for these. They don't work for me, but they might work for others.

Comment author: Thomas 01 May 2010 05:55:06PM 3 points [-]

Question: How many of you, readers and contributers here on this site, actually do work on some (nontrivial) AI project?

Or have an intention to do that in the future?

Comment author: Kazuo_Thow 01 May 2010 06:24:12PM 1 point [-]

Count me as "having an intention to do that in the future". Although I'm currently just an undergraduate studying math and computer science, I hope to (within 5-10 years) start doing everything I can to help with the task of FAI design.

Comment author: Baughn 01 May 2010 06:52:27PM *  2 points [-]

I'm working on one as part of a game, where I'm knocking off just about every concept I've run into - goal systems, eurisko-type self-modifying code, AIXI, etc. I'll claim it's nontrivial because the game is, and I very much intend to make it unusually smart by game standards.

But that's not really true AI. It's for fun, as much as anything else. I'm not going to claim it works very well, if at all; it's just interesting to see what kind of code is involved.

(I have, nevertheless, considered FAI. There's no room to implement it, which was an interesting thing to discover in itself. Clearly my design is insufficiently advanced.)

Comment author: kpreid 01 May 2010 10:35:49PM 1 point [-]

I used to be interested in working on AI, but my current general understanding of the field indicates that for me to do anything worthwhile in the field would require acquiring a lot of additional knowledge and skills — or possibly having a different sort of mind. I am spending my time on other projects where I more readily see how I can do something worthwhile.

Comment author: Daniel_Burfoot 01 May 2010 11:42:52PM 1 point [-]

I am writing a book about a new approach to AI. The book is a roadmap; after I'm finished, I will follow it. That will take many years.

I have near-zero belief that AI can succeed without a major scientific revolution.

Comment author: NancyLebovitz 02 May 2010 12:34:42AM 2 points [-]

I'm interested in what sort of scientific revolution you think is needed and why.

Comment author: Daniel_Burfoot 02 May 2010 12:56:50AM 1 point [-]

Well... you'll have to read the book :-)

Here's a hint. Define a scientific method as any process by which reliable predictions can be obtained. Now observe that human children can learn to make very reliable predictions. So they must be doing some sort of science. But they don't make controlled experiments. So our current understanding of the scientific method must be incomplete: there is some way of obtaining reliable theories about the world other than the standard theorize/predict/test loop.

Comment author: NancyLebovitz 02 May 2010 01:17:57AM 3 points [-]

On the off-chance you haven't heard about this: Unconscious statistical processing in learning languages.

Comment author: kpreid 02 May 2010 02:56:26AM 2 points [-]

I am reminded of a phrase from Yudkowsky's An Intuitive Explanation of Bayes' Theorem, which I was rereading today for no particularly good reason:

What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?

Comment author: Morendil 02 May 2010 09:31:45AM 1 point [-]

Define a scientific method as any process by which reliable predictions can be obtained

At the risk of being blunt, that sounds like a Humpty Dumpty move. There are many processes which yield reliable predictions that we don't call science, and many processes we identify as part of the scientific method which don't yield reliable predictions.

What you've said above can be re-expressed as "if we think theorize/predict/test is the only way to make reliable predictions about the world, then our current understanding of how to make reliable predictions is incomplete". Well, I agree. :)

Comment author: ata 03 May 2010 06:14:38AM 2 points [-]

Yes, I have an intention to do so, because I'm convinced that it is very important to the future of humanity. I don't quite know how I'll be able to contribute yet, but I think I'm smart and creative enough that I'll be able to acquire the necessary knowledge and thinking habits (that's the part I'm working on these days) and eventually contribute something novel, if I can do all that soon enough for it to matter.

Comment author: [deleted] 01 May 2010 06:25:05PM *  38 points [-]

Is anyone else here disturbed over the recent Harvard incident, in which Stephanie Grace's perfectly reasonable email — one that merely expresses agnosticism over the possibility that the well-documented IQ differences between groups are partially genetic — drew harsh and inaccurate condemnation from the Harvard Law School dean?

I feel sorry for the girl, since she trusted the wrong people (the email was allegedly leaked by one of her girlfriends, who got into a dispute with her over a man). We need to be extra careful to self-censor any rationalist discussions about cows "everyone" agrees are holy. These are things I don't feel comfortable even discussing here, since they have ruined many careers and lives through relentless persecution. Even recanting doesn't help at the end of the day, since you are a Google search away, and people who may not even understand the argument will hate you intensely. Scary.

I mean, surely everyone here agrees that the only way to discover truth is to allow all the hypotheses to stand on their own, without giving a privileged few the power to suppress competition. Why is our society so insane that this regularly happens even concerning views that many relevant academics hold in private (or even a majority in certain fields, if the polling is anonymous)?

PS: Also, why does the Dean equate intelligence with genetic superiority, and implicitly even with worth as a person? This is a disturbing view, since half the population will by definition always be below the median. And since we're all going to be terribly stupid compared to AIs in the near future, such implicit values are dangerous in the context of the time we may be living in.

Comment author: RobinZ 01 May 2010 06:54:21PM 0 points [-]

I agree with what you've written, with particular emphasis on the problem of privacy on the Internet (and off, for that matter).

Given that I don't even know who Stephanie Grace is, though, I think I don't care.

Comment author: [deleted] 01 May 2010 07:21:35PM *  3 points [-]

I think when arguing about really controversial things that don't fit your tribe's beliefs via email or other online means, it's best to use those channels only to send sources and citations. Avoid comments — any comments whatsoever — perhaps even quotes, or, Galileo forbid, bolding anything but the title.
Encourage the other people involved in the debate to do the same.

Keep any controversial conclusions gleaned from the data, or endorsements of any paper, off the electronic record. Then, when you are in private, ask them: did you manage to read the Someguyson study I sent you in email #6? When you've exhausted the mailed links, switch to gossip or the weather.

If the mail is leaked and they don't have you on record saying anything forbidden — just mailing around sources — how exactly will they tar and feather you?

I can say this mode of conversation is actually quite stimulating, since I've engaged in it before, though I've only tested it on noncontroversial, complex subjects. It lets you learn what starting points the other person is coming from, and gives you time to cool off in heated arguments. It does, however, drag on for weeks, so it's not really appropriate with strangers.

Comment author: Rain 01 May 2010 06:54:28PM 6 points [-]

Undiscriminating skepticism strikes again: here's the thread on the very topic of genetic IQ differences.

Comment author: [deleted] 01 May 2010 07:06:06PM 2 points [-]

Thanks for the link! I'm new here and really appreciate stuff to read up on, since it's mostly new to me. :)

Comment author: Jack 01 May 2010 07:12:13PM 8 points [-]

Oh good. Make it convenient for the guys running background searches.

Comment author: [deleted] 01 May 2010 07:37:32PM *  21 points [-]

Here is the leaked email by Stephanie Grace if anyone is interested.

… I just hate leaving things where I feel I misstated my position.

I absolutely do not rule out the possibility that African Americans are, on average, genetically predisposed to be less intelligent. I could also obviously be convinced that by controlling for the right variables, we would see that they are, in fact, as intelligent as white people under the same circumstances. The fact is, some things are genetic. African Americans tend to have darker skin. Irish people are more likely to have red hair. (Now on to the more controversial:)

Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders. This suggests to me that some part of intelligence is genetic, just like identical twins raised apart tend to have very similar IQs and just like I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria. I don’t think it is that controversial of an opinion to say I think it is at least possible that African Americans are less intelligent on a genetic level, and I didn’t mean to shy away from that opinion at dinner.

I also don’t think that there are no cultural differences or that cultural differences are not likely the most important sources of disparate test scores (statistically, the measurable ones like income do account for some raw differences). I would just like some scientific data to disprove the genetic position, and it is often hard given difficult to quantify cultural aspects. One example (courtesy of Randall Kennedy) is that some people, based on crime statistics, might think African Americans are genetically more likely to be violent, since income and other statistics cannot close the racial gap. In the slavery era, however, the stereotype was of a docile, childlike, African American, and they were, in fact, responsible for very little violence (which was why the handful of rebellions seriously shook white people up). Obviously group wide rates of violence could not fluctuate so dramatically in ten generations if the cause was genetic, and so although there are no quantifiable data currently available to “explain” away the racial discrepancy in violent crimes, it must be some nongenetic cultural shift. Of course, there are pro-genetic counterarguments, but if we assume we can control for all variables in the given time periods, the form of the argument is compelling.

In conclusion, I think it is bad science to disagree with a conclusion in your heart, and then try (unsuccessfully, so far at least) to find data that will confirm what you want to be true. Everyone wants someone to take 100 white infants and 100 African American ones and raise them in Disney utopia and prove once and for all that we are all equal on every dimension, or at least the really important ones like intelligence. I am merely not 100% convinced that this is the case.

Please don’t pull a Larry Summers on me,

A few minor fallacies, but overall a quite respectable and even stimulating contribution — nothing any reasonable person would consider grounds for ostracism. Note the reference to the opinion she expressed "at dinner": she was betrayed by someone she socialised with.

And yes, I am violating my own advice by bolding that one sentence. ;) I just wanted to drive home how close she may be to a well-meaning, if perhaps a bit tactless, poster on Less Wrong. Again, we need to be careful. What society considers taboo changes over time as well, so one must get a feel for where on the scale of the forbidden a subject sits at any given time, and which way the winds of change are blowing, before deciding whether to discuss it online. Something innocuous today could cost you your job a decade or so in the future.

Edit: For anyone wondering what a "Larry Summers" is.

Comment author: arundelo 01 May 2010 08:02:59PM 13 points [-]
Comment author: Matt_Simpson 01 May 2010 08:46:36PM 2 points [-]

A few minor fallacies

Care to point them out?

Comment author: [deleted] 01 May 2010 09:31:26PM 2 points [-]

Most escape me right now, but I do recall something that bothered me: she implicitly uses stereotypes of African American behaviour, and how they change over time, as an indicator of actual changes in violent behaviour.

I'm sure the two correlate somewhat, but considering how much stronger the changes in wider society were, and how much people's interests regarding what it was best to have others believe about Black behaviour changed over time, I don't think you can base an argument on this either way.

Comment author: CronoDAS 01 May 2010 08:55:26PM 6 points [-]

One of the people criticizing the letter accused the letter writer of privileging the hypothesis - that it's only because of historical contingency (i.e. racism) that someone would decide to carve reality between "African-Americans" and "whites" instead of, say, "people with brown eyes" and "people with blue eyes". (She didn't use that exact phrase, but it's what she meant.)

Comment author: Rain 01 May 2010 09:38:39PM *  1 point [-]

I think it would be fascinating if people with blue eyes were more or less intelligent, when controlling for the variables, than people with brown eyes.

That said, I would expect a larger genetic variation when choosing between long-separated and isolated populations rather than eye colors.

Comment author: [deleted] 01 May 2010 10:27:34PM *  3 points [-]

I'm using eye color as an example here since CronoDAS mentioned it. Replace it with a particular gene, future time orientation, nose type, or whatever. If society makes quantifiable claims about a particular category into which we slice up reality (e.g., "Atheists are more likely to rape and murder!"), an individual should have the right to either test or demand proof for that quantifiable claim.

Race is a pretty good proxy for which populations your ancestors came from. It's not perfect, since, for example, the Black race has the most genetic diversity, and gene flow has increased since the rise of civilization and especially globalisation. Knowing, however, whether most of your ancestors lived outside of Africa for the last 60,000 years, or that your group of ancestors diverged from the other guy's group of ancestors 40,000 years ago, is also relevant information.

I stole this graph from Razib's site (Gene Expression) as a quick reference for what current biology has to say about ancestral populations.

http://www.gnxp.com/wp/wp-content/uploads/2010/02/PIIS096098220902065X.gr2_.lrg_.jpg

Comment author: [deleted] 01 May 2010 10:01:20PM *  11 points [-]

Isn't nearly everything a social construct, though? We can divide people into two groups, those with university degrees and those without. People with them may tend to live longer or die earlier; they may earn more money or less, etc. We may also divide people into groups based on self-identification: do blondes really have more fun than brunettes, do hipsters really feel superior to non-hipsters, do religious people have lower IQs than self-identified atheists, etc.? Concepts like species, subspecies and family are also constructs that are just about as arbitrary as race.

It doesn't really matter in the end. Regardless of how we carve up reality, we can then proceed to ask questions and get answers. Suppose that in 1900 we ran a global test to see whether blue-eyed or brown-eyed people have higher IQs. Lo and behold, we see brown-eyed people have higher IQs. But in 2050 the reverse is true. What happened? The population with brown eyes was heterogeneous, and its demographics changed! If we looked at skin cancer rates, however, we would still see that people with blue eyes have higher rates in both periods.

So why should we bother carving up reality on this racial metric and ask questions about it? For the same reason we bother to carve up reality on the family or gender metric. We base policy on it. If society was colour blind, there would be no need for this. But I hope everyone here can see that society isn't colour blind.

For example, affirmative action's ethical status (which is currently framed as a necessary adjustment against biases, not as reparations for past wrongs) depends on what the data has to say about group differences.

If the data show that people with blue eyes in our country have lower mean IQs when controlling for socioeconomic status and the like, we shouldn't blame racism for their higher college drop-out rates if those rates are what we would expect given the IQ difference. To keep the policy would mean discriminating against competent brown-eyed people. But if there is no difference, then the policy is justified — unless it turns out there is another reason behind it that has nothing to do with discrimination.

I hope you agree, however, that (regardless of the truth of this particular matter) someone should not be vilified for asking questions or proposing hypotheses regarding social constructs that we have in place, regularly operate with, and even make quantifiable claims about.

Comment author: Jack 01 May 2010 10:22:47PM 1 point [-]

Concepts like species, subspecies and family are also constructs that are just about as arbitrary as race.

This is a matter of much dispute and a lot of confusion. See here.

Comment author: [deleted] 01 May 2010 10:37:09PM *  1 point [-]

Thanks for the link, I'm reading it now.

I just want to clear up that in that sentence I'm referring to species and subspecies in the biological sense, and to family in the ordinary everyday sense, not the taxonomic category between order and genus.

Comment author: kim0 02 May 2010 12:52:36PM 3 points [-]

I wondered how humans are grouped, so I took some genes from around the world, did an eigenvalue analysis, and this is what I found:

http://kim.oyhus.no/EigenGenes.html

As you can see, humans are indeed clustered in subspecies.
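For readers unfamiliar with this kind of eigen-analysis: it is essentially principal component analysis on a genotype matrix. Here is a toy sketch with synthetic data (the allele frequencies and population sizes are invented for illustration; the linked page has the real analysis, and whether such clusters warrant the label "subspecies" is disputed below):

```python
# Toy PCA on a genotype matrix (individuals x markers, allele counts 0..2).
# Two synthetic populations are simulated with different allele frequencies;
# the top eigenvector of the covariance matrix separates them.
import numpy as np

rng = np.random.default_rng(0)
freqs_a = rng.uniform(0.1, 0.4, size=50)   # population A allele frequencies
freqs_b = rng.uniform(0.6, 0.9, size=50)   # population B allele frequencies
pop_a = rng.binomial(2, freqs_a, size=(30, 50))
pop_b = rng.binomial(2, freqs_b, size=(30, 50))
genotypes = np.vstack([pop_a, pop_b]).astype(float)

# Center each marker, then take eigenvectors of the covariance matrix.
centered = genotypes - genotypes.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = centered @ eigvecs[:, -1]  # projection onto the top eigenvector

# The two groups land on opposite sides of zero along PC1.
print(pc1[:30].mean() * pc1[30:].mean() < 0)  # -> True
```

Note that clustering along a principal component shows population structure; it does not by itself settle the taxonomic question.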

Comment author: Jack 02 May 2010 06:27:38PM 0 points [-]

This doesn't demonstrate subspecies.

Comment author: CronoDAS 01 May 2010 10:49:52PM 0 points [-]

I didn't say I agreed.

Comment author: [deleted] 01 May 2010 11:14:24PM *  0 points [-]

I never said you did. :) Would you however agree with the sentiment of my last paragraph?

This thread of conversation is easily derailed since whether group differences exist isn't really its topic.

Comment author: CronoDAS 02 May 2010 01:47:15AM 0 points [-]

Yeah, I do...

Comment author: cupholder 02 May 2010 06:47:23PM *  0 points [-]

For example Affirmative action's ethical status (which is currently framed as a nesecary adjustment against biases and not reparations for past wrongs) depends on what the data has to about say about group differences.

Only if you accept that particular framing, I would have thought? If one chooses to justify affirmative action as reparations for past wrongs, 'what the data has to about say about group differences' won't change your opinion of affirmative action.

(ETA - Also.)

Comment author: [deleted] 03 May 2010 05:10:10PM *  4 points [-]

Of course one can do this. But then you get into the sticky issue of why we should group reparations by race. Aren't the Catholic Irish entitled to reparations for their mistreatment as immigrant labour, and for the discrimination against them based on their religion, if the same is true of the Chinese? Aren't Native Americans a bit more entitled to reparations than, say, Indian immigrants? And why are African Americans descended from slaves not distinguished from those who migrated to the US a generation ago (after the civil rights era)?

And how long should such reparations be paid? Indefinitely?

I hope that from the above you can see why there would need to be a new debate on affirmative action if one reframes it.

Comment deleted 02 May 2010 11:56:56PM [-]
Comment author: Nick_Tarleton 03 May 2010 12:13:30AM *  0 points [-]

IAWYC, but "Asians rule at math and science" seems to have a huge cultural basis, and it's at least no more obvious that it has a genetic component than that racial IQ gaps do.

Comment deleted 03 May 2010 12:41:05AM *  [-]
Comment author: [deleted] 03 May 2010 04:50:37PM *  3 points [-]

@Nick Tarleton:

Can you explain how you know Asian math achievement is fully due to cultural factors? Haven't cross-racial adoption studies shown that adopted East Asian children do better than their white peers on IQ tests? I also remember hearing claims that Asians generally do better than whites on the visuospatial component of IQ tests.

Edit: Originally addressed @Roko

Comment deleted 03 May 2010 06:24:00PM [-]
Comment author: timtyler 02 May 2010 12:17:43PM 8 points [-]

The Harvard incident is business as usual: http://timtyler.org/political_correctness/

Comment author: [deleted] 02 May 2010 12:39:50PM 24 points [-]

I'm a bit upset.

In my world, that's dinner-table conversation. If it's wrong, you argue with it. If it upsets you, you are more praiseworthy the more you control your anger. If your anti-racism is so fragile that it'll crumble if you don't shut students up -- if you think that is the best use of your efforts to help people, or to help the cause of equality -- then something has gone a little screwy in your mind.

The idea that students -- students! -- are at risk if they write about ideas in emails is damn frightening to me. I spent my childhood in a university town. This means that political correctness -- that is, not being rude on the basis of race or ethnicity -- is as deep in my bones as "please" and "thank you." I generally think it's a good thing to treat everyone with respect. But the other thing I got from my "university values" is that freedom to look for the truth is sacrosanct. And if it's tempting to shut someone up, take a few deep cleansing breaths and remember your Voltaire.

My own beef with those studies is that you cannot (to my knowledge) isolate the genetics of race from the experience of race. Every single black subject whose IQ is tested has also lived his whole life as black. And we have a history and culture that makes race matter. You can control for income and education level, because there are a variety of incomes and education levels among all races. You can control for home environment with adoption and twin studies, I guess. But you can't control for what it's like to live as a black person in a society where race matters, because all black people do. So I can't see how such a study can really ever isolate genetics alone. (But correct me if I'm missing something.)

Comment author: Jack 02 May 2010 06:56:50PM 5 points [-]

Since a mixed racial background should make a difference in genes but only a small difference in how our culture treats a person, if the IQ gap is the result of genetics we should find that those with mixed-race backgrounds have higher IQs than those of mostly or exclusively African descent. This has been approximated with skin-tone studies in the past; my recollection is that one study showed a slight correlation between lighter skin tone and IQ, and another showed no correlation. There just hasn't been much research done, and I doubt there ever will be much (which is fine by me).

Comment author: NancyLebovitz 02 May 2010 07:05:49PM 4 points [-]

Afaik, skin tone, hair texture, and facial features make a large difference in how African Americans treat each other.

White people, in my experience, are apt to think of race in binary terms, but this might imply that skin tone affects how African Americans actually get treated.

Comment author: [deleted] 03 May 2010 02:39:27PM *  3 points [-]

I'm still not confident because we're not, as Nancy mentioned, completely binary about race even in the US.

What you'd really need to do is a comparative study between the US and somewhere like Brazil or Cuba, which had a different history regarding mixed race. (The US worked by the one-drop-of-blood rule; Spanish and Portuguese colonies had an elaborate caste system where more white blood meant more legal rights.) If it's mainly a cultural distinction, we ought to see a major difference between the two countries -- the light/dark gap should be larger in the former Spanish colony than it is in the US. If culture doesn't matter much, and the gap is purely genetic, it should be the same all around the world.

The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race. Bias can go in that direction as well, and a subject like this demands extraordinary care in controlling for it. Coming out with a bad, politically motivated IQ study could be extremely harmful.

Comment author: NancyLebovitz 03 May 2010 03:04:41PM 1 point [-]

Minnesota Trans-Racial Adoption Study suggests that a lot of the difference is cultural and/or that white parents are better able to protect their children from the effects of prejudice.

I also have no idea what the practical difference of 4 IQ points might be.

I don't know where you'd find people who were interested enough in racial differences in intelligence to do major studies on it, but who didn't have preconceived ideas.

Comment author: Jack 03 May 2010 04:53:48PM -1 points [-]

The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race.

Frankly, I'm not sure why the research should be done at all.

Comment author: steven0461 02 May 2010 09:26:13PM *  5 points [-]

I think there's something to be said for not posting opinions such that 1) LW is likely to agree with the opinion, and 2) sites perceived as agreeing with the opinion are likely to be the target of hate campaigns.

Comment author: mattnewport 02 May 2010 10:35:02PM *  8 points [-]

This is the best exposition I have seen so far of why I believe strongly that you are very wrong.

On a Bus in Kiev

I remember very little about my childhood in the Soviet Union; I was only seven when I left. But one memory I have is being on a bus with one of my parents, and asking something about a conversation we had had at home, in which Stalin and possibly Lenin were mentioned as examples of dictators. My parent took me off the bus at the next stop, even though it wasn’t the place we were originally going.

Please read the whole thing and remember that this is where the road inevitably leads.

Comment author: Nick_Tarleton 02 May 2010 11:40:50PM *  12 points [-]

Yes, self-censorship is Prisoner's Dilemma defection, but unilaterally cooperating has costs (in terms of LW's nominal purpose) which may outweigh that (and which may in turn be outweighed by considerations having nothing to do with this particular PD).

Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".

Comment author: mattnewport 02 May 2010 11:53:42PM -1 points [-]

Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".

I don't, which is why I posted it.

In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all how do we know that two and two make four? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable – what then?

  • Winston Smith in George Orwell’s 1984
Comment author: steven0461 03 May 2010 03:15:21AM 8 points [-]

I'm sympathetic to this as a general principle, but it's not clear to me that LW doesn't have specific battles to fight that are more important than the general principle.

Comment author: Nick_Tarleton 02 May 2010 11:48:03PM *  5 points [-]

I share your concern. Literal hate campaigns seem unlikely to me, but such opinions probably do repulse some people, and make it considerably easier for us to lose credibility in some circles, that we might (or might not) care about. On the other hand, we pretty strongly want rationalists to be able to discuss, and if necessary slay, sacred cows, for which purpose leading by example might be really valuable.

Comment author: Morendil 02 May 2010 11:54:23PM -1 points [-]

I'm more directly disturbed by the bias present in your exposition: "perfectly reasonable", "merely expresses agnosticism", "well documented", "harsh and inaccurate".

Starting off a discussion with assorted applause lights and boo lights strikes me as unlikely to lead to much insight.

What would be likely to lead to useful insight? Making use of the tools LessWrong's mission is to introduce us to, such as the applications of Bayesian reasoning.

"Intelligence has a genetic component" strikes me as a causal statement. If it is, we ought to be able to represent it formally as such, tabooing the terms that give rise to cognitive muddles, until we can tell precisely what kind of data would advance our knowledge on that topic.

I've only just cracked open Pearl's Causality, and started playing with the math, so am still very much an apprentice at such things. (I have my own reasons to be fooling with that math, which are not related to the race-IQ discussion.) But it has already convinced me that probability and causality are deep topics which it's very easy to draw mistaken conclusions about if you rely solely on a layman's intuition.

For instance, "the well documented IQ differences between groups" are purely probabilistic data, which tell us very little about causal pathways generating the data, until and unless we have either controlled experiments, or further data sets which do discriminate between the competing causal models (only very grossly distinguished into "nature" and "nurture").

I don't know if the email you quoted (thanks for that, BTW, it's a treat to have access to a primary source without needing to chase it down) is racist, but it does sound very ignorant to me. It makes unwarranted inferential leaps, e.g. from "skin and hair color are definitely genetic" to "some part of intelligence is genetic", omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other. It comes across as arrogant and elitist as well as ignorant when saying "I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria".

It is not bad science to be on the lookout specifically for data that claims to be "scientific proof" of some old and demonstrably harmful prejudices, and to hold such claims to a higher standard. Just as we do hold claims of "scientific proof of ESP" to a higher standard - at least of scrutiny and replicability - than, say, claims of a correlation between apparel color and competitive performance. We have more reason to suspect ulterior motives in the former case than in the latter.

Comment author: Jack 03 May 2010 12:54:30AM *  19 points [-]

Dinnertime conversations between regular, even educated people do not contain probabilistic causal analyses. In the email Grace claimed something was a live possibility and gave some reasons why. Her argument was not of the quality we expect comments to have here at Less Wrong. And frankly, she does sound kind of annoying.

But that all strikes me as irrelevant compared to being made into a news story and attacked on all sides, by her dean, her classmates and dozens of anonymous bloggers. By the standards of normal, loose social conversation she did nothing deserving of this reaction.

I feel a chilling effect and I've only ever argued against the genetic hypothesis. Frankly, you should too since in your comment you quite clearly imply that you don't know for sure there is no genetic component. My take from the reaction to the email is that the only socially acceptable response to encountering the hypothesis is to shout "RACIST! RACIST!" at the top of your lungs. If you think we'd be spared because we're more deliberate and careful when considering the hypothesis you're kidding yourself.

Comment author: Morendil 03 May 2010 01:39:49AM 1 point [-]

By the standards of normal, loose social conversation she did nothing deserving of this reaction.

Sure. What I do find disturbing is how, knowing what she was doing (and who she was sending it to), the "friend" who leaked that email went ahead and did it anyway. That's positively Machiavellian, especially six months after the fact.

However, I do not feel a need to censure myself when discussing the race-IQ hypothesis. If intelligence has a genetic component, I want to see the evidence and understand how the evidence rules out alternatives. I would feel comfortable laying out the case for and against in an argument map, more or less as I feel comfortable laying out my current state of uncertainty regarding cryonics in the same format.

Neither do I feel a need to shout at the top of my lungs, but it does seem clear to me that racism was a strong enough factor in human civilization that it is necessary, for the time being, to systematically compensate, even at the risk of over-compensating.

"I absolutely do not rule out the possibility [of X]" can be a less than open-minded, even-handed stance, depending on what X you declare it about. (Consider "I absolutely do not rule of the possibility that I will wake up tomorrow with my left arm replaced by a blue tentacle.") Saying this and mistaking it for an "agnostic" stance is kidding oneself.

Comment author: [deleted] 03 May 2010 04:36:18PM *  4 points [-]

Since people are discussing group differences anyway. I would just like people to be a bit clearer in their phrasing.

Intelligence does have a genetic component; I hope no one argues that the cognitive differences between the average chimpanzee and the average rhesus monkey are a result of nurture. The question is whether there is any variation in the genetic component among humans.

Studies have shown a high heritability for IQ. This doesn't necessarily mean much of it is genetic, but it does seem a strong position to take, especially considering results from twin studies. A good alternative explanation I can think of, one that could be considered comparable in explanatory power, would be differences in prenatal environment beyond those controlled for in previous studies (which could get sticky, since such differences may themselves show group genetic variation! For example, the average length of pregnancy, and the risks associated with post-term complications, vary slightly between races).
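For reference, the classical twin-study heritability estimate comes from Falconer's formula, which doubles the difference between monozygotic and dizygotic twin correlations. The correlations below are illustrative round numbers, not results from any specific study:

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are
# the trait correlations for identical and fraternal twin pairs.
def falconer_h2(r_mz, r_dz):
    """Heritability estimate from twin correlations, under the equal-environments assumption."""
    return 2.0 * (r_mz - r_dz)

# Illustrative values only: with r_MZ = 0.85 and r_DZ = 0.60,
# the formula attributes about half the variance to genes.
h2 = falconer_h2(r_mz=0.85, r_dz=0.60)
print(round(h2, 2))  # -> 0.5
```

Note that the formula inherits the equal-environments assumption, which is exactly where alternative explanations like prenatal-environment differences enter.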

The question discussed here, however, is whether there are any meaningful differences between human groups in their genetic predispositions towards mental faculties.

We know quite a bit from genetic analysis about where people with certain markers have spread and which groups have been isolated. Therefore the real question we face is twofold:

  1. Just how evolutionarily recent are abstract thinking and the other mental tricks IQ tests measure? Consider, for example, the late advent of behavioral modernity versus the early evidence of anatomically near-modern humans. Some claim modern behaviour was an evolutionary change following the well-documented recent bottleneck of the human species; others say it was a radical cultural adaptation to an abrupt environmental change, or just part of a long, slow rise in population density and material-culture complexity we haven't yet spotted. Considering how sketchy the archeological record is, we shouldn't be surprised at all if it turns out we've been wrong for decades and modern behaviour isn't recent at all.

  2. Is the selective value of intelligence compared to other traits identical in all environments encountered by Homo sapiens? Remember, we may already have some evidence that intelligence is sometimes not that useful for hominids, depending on how we interpret the fossils of Homo floresiensis. Could this also be true of Homo sapiens populations?

The answers to these two questions would tell us how likely it would be for these differences to appear, and how noticeable they might be, within the time window current biology estimates for differences between populations to arise.

Note: This is from Razib Khan's site (Gene Expression); I'm reposting it here so you don't need to hunt it down in my other post. http://www.gnxp.com/wp/wp-content/uploads/2010/02/PIIS096098220902065X.gr2_.lrg_.jpg

Comment author: mattnewport 03 May 2010 06:15:01PM 1 point [-]

If genetic differences in intelligence could not be relevant to reproductive success within a single generation, it would be difficult to see how human intelligence could have evolved.

Comment author: Tyrrell_McAllister 03 May 2010 01:14:52AM *  8 points [-]

I don't know if the email you quoted (thanks for that, BTW, it's a treat to have access to a primary source without needing to chase it down) is racist, but it does sound very ignorant to me. It makes unwarranted inferential leaps, e.g. from "skin and hair color are definitely genetic" to "some part of intelligence is genetic", omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other.

Let's be careful here. The letter does not assert baldly that "some part of intelligence is genetic". Rather, the letter asserts that some evidence "suggests to me that some part of intelligence is genetic".

Furthermore, that particular inferential leap does not begin with the observation that "skin and hair color are definitely genetic". Rather, the inferential leap begins with the claim that "Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders." Therefore, at least with regards to that particular inference, it is not fair to criticize the author for "omitting the very different length of developmental chains leading from genes to pigmentation on the one hand, and intelligence on the other."

[ETA: Of course, the inference that the author did make is itself open to criticism, just not the criticism that you made.]

I say all this as someone who considers Occam to be pretty firmly on the side of nongenetic explanations for the racial IQ gaps. But no progress in these kinds of discussions is possible without assiduous effort to avoid misrepresenting the other side's reasoning.

Comment author: Nick_Tarleton 02 May 2010 11:55:52PM *  12 points [-]

PS Also, why does the Dean equate intelligence with genetic superiority, and implicitly even with worth as a person?

See Michael Vassar's discussion of this phenomenon. Also, I think that people discussing statements they see as dangerous often implicitly (and unconsciously) adopt the frames that make those statements dangerous (frames which they correctly believe many people unreflectively hold and can't easily be talked out of), and treat those frames as simple reality, in order to more simply and credibly call the statement, and the person who made it, dangerous and Bad.

Comment author: cwillu 01 May 2010 09:21:31PM *  9 points [-]

Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.

<30 seconds later>

After a brief 10 word discussion on #lesswrong, I've made a lesswrong team :p

Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.

Comment author: Jack 01 May 2010 09:35:53PM 1 point [-]

What is this?

Comment author: Rain 01 May 2010 09:41:21PM *  2 points [-]

Donating money to scientific organizations (in the form of a larger power bill). You run your CPU (otherwise idle) to crunch difficult, highly parallel problems like protein folding.

Comment author: cwillu 01 May 2010 10:16:47PM 0 points [-]

Granted that in many cases, it's donating money that you were otherwise going to burn.

Comment author: mattnewport 01 May 2010 10:23:57PM 2 points [-]

Granted that in many cases, it's donating money that you were otherwise going to burn.

No, modern CPUs use considerably less power when they are idle. A computer running folding at home will be drawing more power than if it were not.

Comment author: cwillu 01 May 2010 10:43:48PM *  0 points [-]

Many != all.

My desktop is old enough that it uses very little more power at full capacity than it does at idle.

Additionally, you can configure (may be the default, not sure) the client to not increase the clock rate.

Comment author: Rain 01 May 2010 11:02:00PM *  0 points [-]

Idle could also mean 'off', which would be significant power savings even (especially?) for older CPUs.

Comment author: cwillu 02 May 2010 01:18:40AM *  1 point [-]

One who refers to their powered-off computer as 'idle' might find themselves missing an arm.

Comment author: Rain 03 May 2010 02:15:36PM *  1 point [-]

Except I'm talking about opportunity cost rather than redefining the word. You can turn off a machine you aren't using, a machine that's idle.

Comment author: mattnewport 01 May 2010 11:28:03PM *  1 point [-]

Many != all.

It is also not equal to 'some'. The vast majority of computers today will use more power when running folding at home than they would if they were not running folding at home. There may be some specific cases where this is not true but it will generally be true.

My desktop is old enough that it uses very little more power at full capacity than it does at idle.

You've measured that, have you? Here's an example of some actual measurements for a range of current processors' power draw at idle and under load. It's not a vast difference, but it is real, ranging from about a 30W / 40% increase in total system power draw to around a 100W / 100% increase.
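The marginal cost of that extra draw is easy to estimate. The wattage and electricity price below are assumptions for the sake of illustration, not measurements:

```python
def annual_cost_usd(extra_watts, hours_per_day, usd_per_kwh):
    """Yearly cost of the extra power a loaded CPU draws over idle."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# Assumed figures: 100 W extra under load, running 24 h/day, $0.12/kWh.
print(round(annual_cost_usd(100, 24, 0.12), 2))  # 105.12 -> roughly $105/year
```

So at the high end of the measured range, round-the-clock folding costs on the order of a hundred dollars a year in electricity; at the low end, proportionally less.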

Additionally, you can configure (may be the default, not sure) the client to not increase the clock rate.

I couldn't find mention of any such setting on their site. Do you have a link to an explanation of this setting?

Comment author: cwillu 02 May 2010 01:39:18AM *  1 point [-]

On further consideration, my complaint wasn't my real/best argument, consider this a redirect to rwallace's response above :p

That said, I personally don't take 'many' as meaning 'most', but more in the sense of "a significant fraction", which may be as little as 1/5 and as much as 4/5. I'd be somewhat surprised if the number of old machines (5+ years old) in use wasn't in that range.

re: scaling, the Ubuntu folding team's wiki describes the approach.

Comment author: rwallace 01 May 2010 11:56:47PM 6 points [-]

But you've already paid for the hardware, you've already paid for the power to run the CPU at baseload, and the video card, and the hard disk, and all the other components; if you turn the machine off overnight, you're paying for wear and tear on the hardware turning it off and on every day, and paying for the time you spend booting up, reloading programs and reestablishing your context before you can get back to work.

In other words, the small amount of money spent on the extra electricity enables the useful application of a much larger chunk of resources.

That means if you run Folding@home, your donation is effectively being matched not just one for one but severalfold, and not by another philanthropist, but by the universe.

Comment author: mattnewport 01 May 2010 11:59:26PM 0 points [-]

I'm not saying it isn't a net gain, it may well be according to your own personal weighing of the factors. I'm just saying it is not free. Nothing is.

Comment author: Jack 02 May 2010 12:00:41AM *  0 points [-]

Assuming whatever gets learned through folding@home has applications they should offer users partial ownership of the intellectual property.

Comment author: rwallace 02 May 2010 03:19:07AM 0 points [-]

It's scientific research, the results are freely published.

Comment author: CarlShulman 02 May 2010 03:40:09AM *  1 point [-]

A severalfold match isn't very impressive if the underlying activity is at least several orders of magnitude less efficient than alternatives, which seems likely here.

Comment author: rwallace 02 May 2010 10:43:46AM 0 points [-]

It seems highly unlikely to me. Biomedical research in general and protein folding in particular are extremely high leverage areas. I think you will be very hard put to it to find a way to spend resources even a single order of magnitude more efficiently (let alone make a case that the budget of any of us here is already being spent more efficiently, either on average or at the margin).

Comment author: CarlShulman 02 May 2010 08:44:06PM 6 points [-]
  1. Moore's Law means that the cost of computation is falling exponentially. Even if one thought that providing computing power was the best way to spend money (on electricity) it would likely be better to save the money spent on the electric power and buy more computing power later, unless the computation is much much more useful now.

  2. Biomedical research already gets an outsized portion of all R&D, with diminishing returns. The NIH budget is over $30 billion.

  3. Slightly accelerating protein folding research doesn't benefit very much from astronomical waste considerations compared to improving the security of future progress with existential risk reduction.
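The tradeoff in point 1 can be sketched with a toy model: if a fixed budget buys exponentially more computation the longer you wait, then computing now is only worthwhile when the results are sufficiently urgent. The two-year halving time below is an assumption for illustration, not a measured figure:

```python
def compute_per_dollar(years_from_now, halving_time_years=2.0):
    """Relative amount of computation a fixed sum buys after waiting,
    under an assumed Moore's-law-style halving of compute cost."""
    return 2 ** (years_from_now / halving_time_years)

# Under these assumptions, waiting 4 years buys 4x the computation
# for the same money spent on hardware and electricity.
print(compute_per_dollar(4))  # 4.0
```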

Comment author: rwallace 02 May 2010 09:03:38PM 0 points [-]
  1. In practice, it is worth doing the computation now -- we can easily establish this by looking at the past, and noting that the people who performed large computations then would not have been better off waiting until now.

  2. $30 billion is a lot of money compared to what you and I have in our pockets. It's dirt cheap compared to the trillions being spent on unsuccessful attempts to treat people who are dying for lack of better biotechnology.

  3. By far the most important way to reduce real life existential risks is speed.

  4. Even if you could find a more cost effective research area to finance, it is highly unlikely that you are actually spending every penny you can spare in that way. The value of spending resources on X, needs to be compared to the other ways you are actually spending those resources, not to the other ways you hypothetically could be spending them.

Comment author: Vladimir_Golovin 03 May 2010 06:46:29AM 0 points [-]

and the video card

They have high-performance GPU clients that are a lot faster than CPU-only ones.

Comment author: Rain 03 May 2010 02:17:32PM 4 points [-]

if you turn the machine off overnight, you're paying for wear and tear on the hardware turning it off and on every day, and paying for the time you spend booting up, reloading programs and reestablishing your context before you can get back to work.

I've seen numerous discussions about whether it's better / more economical to turn off your machine or to leave it running all the time, and I have never seen a satisfactory conclusion based on solid evidence.

Comment author: RobinZ 03 May 2010 02:35:05PM 0 points [-]

That's because it depends on the design. On the lifetime point, for example: if the machine tends to fail based on time spent running (solder creep, perhaps), leaving it running more often will reduce the life, but if the machine tends to fail based on power cycling (low-cycle fatigue, perhaps), turning it on and off more often will reduce the life.

Given that I've dropped my MacBook from a height of four feet onto a concrete slab, I figure the difference is roundoff error as far as I am concerned.

Comment author: MichaelGR 03 May 2010 06:16:18PM *  2 points [-]

I wrote a quick introduction to distributed computing a while ago:

http://michaelgr.com/distributed-computing/

My favorite project (the one which I think could benefit humanity the most) is Rosetta@home.

Comment author: Jack 01 May 2010 10:03:41PM 1 point [-]

So I think I have it working but... there's nothing to tell me if my CPU is actually doing any work. It says it's running but... is there supposed to be something else? I used to do SETI@home back in the day and they had some nice feedback that made you feel like you were actually doing something (of course, you weren't, because your computer was looking for non-existent signals, but still).

Comment author: cwillu 01 May 2010 10:28:25PM *  0 points [-]

I use the origami client manager thingie; it handles deploying the folding client, and gives a nice progress meter. The 'normal' clients should have similar information available (I'd expect that origami is just polling the clients themselves).

Comment author: zero_call 02 May 2010 12:49:27AM *  1 point [-]

...of course, you weren't because your computer was looking for non-existent signals...

The existence of ET signals is an open question. SETI is a fully legitimate organization run according to a well-thought-out plan for collecting data to help answer this question.

Comment author: Jack 02 May 2010 01:00:19AM 1 point [-]

I think the probability they ever find what they're looking for is extraordinarily low. But I don't have anything against the organization.

Comment author: zero_call 02 May 2010 01:14:29AM *  1 point [-]

Right on, but just so you know, other (highly informed) people think that we may find a signal by 2027, so there you go. For an excellent short article (explaining this prediction), see here.

Comment author: Jack 02 May 2010 01:55:41AM 0 points [-]

I don't think the author deals with the Fermi paradox very well, and the paradox is basically my reason for assigning a low probability to SETI finding something.

Comment author: zero_call 02 May 2010 02:08:13AM 0 points [-]

The Fermi paradox also struck me as a big issue when I first looked into these ideas, but now it doesn't bother me so much. Maybe this should be the subject of another open thread.

Comment author: nhamann 02 May 2010 12:55:07AM 2 points [-]

Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don't understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I'm still not sure what to think about which project I should prefer to run.

Comment author: MichaelGR 03 May 2010 06:14:22PM *  1 point [-]

Personally I run Rosetta@home because, based on my research, it could be more useful for designing new proteins and computationally predicting the function of proteins. Folding seems to be more about understanding how proteins fold, which can help with some diseases, but isn't nearly the game-changer that in silico design and shape prediction would be.

I also think that the SENS Foundation (Aubrey de Grey & co) have some ties to Rosetta, and might use it in the future to design some proteins.

I'm a member of the Lifeboat Foundation team: http://lifeboat.com/ex/rosetta.home

But we could also create a Less Wrong team if there's enough interest.

Comment author: CronoDAS 01 May 2010 09:54:20PM 3 points [-]
Comment author: [deleted] 01 May 2010 11:26:50PM *  0 points [-]

It's too late for me. It's the second of May over here. :' (

Comment author: [deleted] 02 May 2010 12:55:13AM 9 points [-]

Today, while I was attending an honors banquet, a girl in my class and her boyfriend were arguing over whether or not black was a color. When she had somewhat convinced him that it wasn't (I say somewhat because the argument was more-or-less ending and he didn't have a rebuttal), I asked "Wait, are you saying I can't paint with black paint?" She conceded that, of course, black paint can be used to paint with, but that black wasn't technically a color. At which point I explained that we were likely using two different definitions of color, and that we should explain what we mean. I gave two definitions: 1] The various shades which a human eye sees and the brain processes. 2] The specific wavelengths of light that a human eye can pick up. The boyfriend and I were using definition 1, whereas she was using definition 2. And with that cleared up, the debate ended.

Note: Both definitions aren't word for word, but somewhat close. I was simply making the distinction between the wavelength itself and the process of seeing something and placing it in a certain color category.

Comment author: zero_call 02 May 2010 03:56:34AM 0 points [-]

Huzzah! That's all too common a problem... sometimes the main problem...

Comment author: cousin_it 02 May 2010 01:42:43PM *  7 points [-]

One could argue that definition 2 is Just Wrong, because it implies that purple isn't a color (purple doesn't have a wavelength, it is non-spectral).

Comment author: Liron 03 May 2010 03:18:18AM 2 points [-]

This will replace Eliezer's tree falling in a forest sound as my go-to example of how an algorithm feels on the inside about wrong questions.

Comment author: Jack 02 May 2010 07:36:43AM 0 points [-]

"Talking with the Planets"(1901) by Nikola Tesla

Comment author: zero_call 02 May 2010 12:24:11PM *  0 points [-]

Here is a little story I've written. "When the sun goes down."

In the dawning light a boy steps out onto the shadowed plane. All around him lie the pits of entry into the other realms. When he reaches the edge of these pits he skirts his way around slowly and carefully until reaching the open space between them. He continues walking in the shadowed plane until the sun has waned completely, and all is dark. And then the shadowed plane becomes the night plane.

In the night the pits cannot be seen, and the boy can no longer walk his way among them to avoid the entrance into the other realms. He is faced with the decision of continuing the journey in danger, or stopping, and setting up a camp in one place. He knows he has a small safe area around him which would be free of obstacles and the dangers of the pits. But it is a place of stagnation in the night plane, and it lies stagnant as a swampy pool until the sun has come again. And it can be hard to guess the rising of the sun in the night plane, when there is nothing to judge the changing of the sky.

Experience has shown him that the pits could be both good and bad, but frequently bad. The pits had stress and nervousness and fear. There could be great rewards lying in the pits, waiting to be seized, but these rewards required the unknown descent. The descent into the other realms was like a deep pulsing fear inside the boy's mind. He sits down on the rough ground of the night plane, looking to the sky for distraction. He looks to escape the decision to enter the pits or to continue walking along the plane. Either decision holds fear, the commitment, the isolation of the pits.

"I seek distraction from my journey," says the boy, and in response a sprite appears on the horizon, a point of light. This light is dimmer than the sun, far dimmer. The light holds his gaze. From the direction of the light there comes a voice.

Sprite: Welcome to the other space.

Boy: Show me something new.

Sprite: What would you like to see?

Boy: I will see what lies in the pits. But I will not enter.

Sprite: I can show you the pits, but you cannot touch them, you can only look.

And the boy nods his head and the Sprite takes his hand, and his mind leaves his body sitting on the night plane. In the other space, his mind runs from pit to pit, leaning just over the edge, without fear of entry. The sprite stands beside him all the while, a comforting presence, an ethereal presence which takes no form. When it takes his hand, he feels the hand of air. When he sees its light, it is the light of a distant star, never close enough to be revealed for its source.

In the pits he sees many things. He sees all of his curiosities, and branching curiosities, points of humor, interest, and instinctual desire. Each view leads him to the next as though the night plane lay below him at a great distance, where the solid spaces between the pits could not be seen. In the other space, the boy could look down on the night plane, see into the pits, and move from one to the next with no more effort than the slight shift of his gaze.

The sun approaches rapidly now and the boy begins to feel a sense of weariness. He feels confused, lonely, fatigued by his removal from the solid earth of the shadowed plane. He calls out to the emptiness once more.

Boy: Take me back to the plane. The sun approaches.

Sprite: Let us go then.

And with his simple command he executes his path. Back on the plane the sun begins to rise -- the boy, tired, exhausted from his journeys in the other space. He feels safe, having avoided the treacheries of the pits for one more night. In the daytime he can navigate safely once again. In the back of his mind, he knows the pits will come once more, and the fear remains inside his mind, tucked away into a back corner. The fatigue of the experience has its own fear, but this fear he cannot understand, and so he does not feel it.

The sun rises, the boy sleeps to recover, wakes again, and looks out among the plane. There is only a short time now before the sun will dawn upon the shadowed plane. In the dim light the boy begins again.

Comment author: Lightwave 02 May 2010 11:43:33PM 2 points [-]

Here's my question to everyone:

What do you think are the benefits of reading fiction (all kinds, not just science fiction) apart from the entertainment value? Whatever you're learning about the real world from fiction, wouldn't it be more effective to read a textbook instead or something? Is fiction mostly about entertainment rather than learning and improvement? Any thoughts?

Comment author: Jack 02 May 2010 11:53:32PM 2 points [-]

Fiction is good for teasing out possibilities and counterfactuals, experimenting with different attitudes toward the world (as opposed to learning facts about the world), and learning to be cool.

Comment author: NancyLebovitz 03 May 2010 12:28:36AM *  3 points [-]

On the other hand (and I speak as a person who really likes fiction), it's possible that you learn more about the human range by reading letters and diaries-- whatever is true in fiction may be distorted to make good stories.

Comment author: Morendil 03 May 2010 12:04:42AM 5 points [-]

A possible benefit of fiction is that it leads you to experience emotions vicariously that it would be much more expensive to experience for real, yet the vicarious experience is realistic enough that it serves as useful practice, a way of "taming" the emotions. Textbooks don't convey emotions.

I seem to recall this argument from a review of Cloverfield, or possibly the director's commentary. Broadcast images such as from the 9/11 aftermath generated lots of anxiety, and seeing similar images - the amateurish, jerky camcorder type - reframed in a fictional setting which is "obviously" over the top helps you, the audience, come to terms with the reality.

Comment author: Nisan 03 May 2010 06:50:11AM *  4 points [-]

It was not until I read Three Worlds Collide that I began to embrace moral consequentialism. I would not have found an essay or real-life case study nearly as convincing.

ETA: I didn't change my mind just because I liked the story. The story made me realize that in a particular situation, I would be a moral consequentialist.

Comment author: Academian 03 May 2010 07:09:14AM *  6 points [-]

My take on works of fiction, especially written fiction, is that they're thought experiments for your emotional intelligence. The best ones are the ones written for that purpose, since I think they tend to better optimize the net value of entertainment and personal growth.

Morality in particular usually stems from some sort of emotional intelligence, like empathy, so it makes sense to me that written fiction could help especially with that.

Comment author: [deleted] 03 May 2010 02:32:11PM 5 points [-]

We are wired for individual rather than general insights. Stories are much more effective at communicating certain things than treatises are. I would never have believed, in theory, that a man who enjoyed killing could be worthy of respect; only a story could convince me. To use Robin Hanson's terminology, narrative can bring near mode and far mode together.

Why not true stories? I think there you get into Aristotle and why verisimilitude can be more effective than mere reality. True stories are good too, but life is disorderly and not necessarily narrative. It's a truism of writing workshops and creative writing classes that whenever you see a particularly unrealistic event in a story, the author will protest "But that really happened!" It doesn't matter; it's still unrealistic. Narrative is, I think, a particular kind of brain function that humans are good at, and it's a painting, not a photograph. To tap into our ability to understand each other through narrative, we usually need to fictionalize the world, apply some masks and filters.

Comment author: eugman 03 May 2010 02:59:30AM 2 points [-]

Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people supporting cryonics is why do they assume that the future will be a good place to live in? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?

Comment author: ata 03 May 2010 03:17:42AM 1 point [-]

Or do they figure that if they are revived, FAI has most likely come to pass?

Can't speak for any other cryonics advocates, but I find that to be likely. I see AI either destroying or saving the world once it's invented, if we haven't destroyed ourselves some other way first, and one of those could easily happen before the world has a chance to turn dystopian. But in any case, if you wake up and find yourself in a world that you couldn't possibly bear to live in, you can just kill yourself and be no worse off than if you hadn't tried cryonics in the first place.

Comment author: humpolec 03 May 2010 12:35:30PM 0 points [-]

Unless it's unFriendly AI that revives you and tortures you forever.

Comment author: NancyLebovitz 03 May 2010 12:49:11PM 1 point [-]

Actually, it's quite possible to deny physical means of suicide to prisoners, and sufficiently good longevity tech could make torture for a very long time possible.

I think something like that (say, for actions which are not currently considered to be crimes) is possible, considering the observable cruelty of some fraction of the human race, but not very likely -- on the other hand, I don't know how to begin to quantify how unlikely it is.

Comment author: gregconen 03 May 2010 02:10:05PM 6 points [-]

Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don't think it's a likely future.

Comment author: NancyLebovitz 03 May 2010 06:03:35AM 3 points [-]

Science fiction has a bias towards things going wrong.

In the particular case of cryonics, if there's a dystopian future where the majority of people have few or no rights, it's a disaster all around, but as ata says, you can presumably commit suicide. There's a chance that even that will be unfeasible-- for example if brains are used, while conscious, for their processing power. This doesn't seem likely, but I don't know how to evaluate it in detail.

The other case-- people in general have rights, but thawed people, or thawed people from before a certain point in time, do not-- requires that thawed people do not have a constituency. This doesn't seem terribly likely, though as I recall, Niven has it that it takes a very long time for thawing to be developed.

Normally, I would expect for there to be commercial and legal pressures for thawed people to be treated decently. (I've never seen an sf story in which thawed people are a political football, but it's an interesting premise.)

I think the trend is towards better futures (including richer, with less reason to enslave people), but there's no guarantee. I think it's much more likely that frozen people won't be revived than that they'll be revived into a bad situation.

Comment author: ata 03 May 2010 06:26:15AM 4 points [-]

Science fiction has a bias towards things going wrong.

All fiction has a bias towards things going wrong. Need some kind of conflict.

(Reality also has a bias towards things going wrong, but if Fun Theory is correct, then unlike with fiction, we can change that condition without reducing the demand for reality.)

Comment author: NancyLebovitz 03 May 2010 06:43:53AM 3 points [-]

Science fiction has a stronger bias towards things going wrong on a grand scale than most fiction does.

Comment author: JoshuaZ 03 May 2010 06:07:32AM 3 points [-]

A dystopian society is unlikely to thaw out and revive people in cryostasis. Cryostasis revival makes sense for societies that are benevolent and have a lot of free resources. Also, be careful not to generalize from fictional examples. They are not evidence. That's all the more the case here because science fiction is in general a highly reactionary genre that, even as it uses advanced technology, either warns about its perils or uses it as an excuse to hearken back to a more romantic era. For example, look at how many science fiction stories and universes have feudal systems of government.

Comment author: eugman 03 May 2010 02:13:24PM 4 points [-]

Now that's a reasonable argument: benevolent, resource rich societies are more likely to thaw people. Thanks.

And yes, that's true, science fiction does often look at what could go really wrong.

Comment author: Matt_Stein 03 May 2010 04:32:54AM 5 points [-]

So, I'm somewhat new to this whole rationality/Bayesianism/(nice label that would describe what we do here on LessWrong). Are there any podcasts or good audiobooks that you'd recommend on the subjects of LessWrong? I have a large amount of time at work that I can listen to audio, but I'm not able to read during this time. Does anyone have any suggestions for essential listening/reading on subjects similar to the ones covered here?

Comment author: khafra 03 May 2010 06:02:01AM *  19 points [-]

Ask A Rationalist--choosing a cryonics provider:

I'm sold on the concept. We live in a world beyond the reach of god; if I want to experience anything beyond my allotted threescore and ten, I need a friendly singularity before my metabolic processes cease; or information-theoretic preservation from that cessation onward.

But when one gets down to brass tacks, the situation becomes murkier. Alcor whole-body suspension is nowhere near as cheap as the numbers that get thrown around in discussions of cryonics--if you want to be prepared for senescence as well as accidents, a 20-year payoff on whole-life insurance plus Alcor dues runs near $200/month; painful but not impossible for me.

The other primary option, Cryonics Institute, is 1/5th the price; but the future availability--even at additional cost--of timely suspension is called into question by their own site.

Alcor shares case reports, but no numbers for average time between death and deep freeze, which seems to stymie any easy comparison on effectiveness. I have little experience reading balance sheets, but both companies seem reasonably stable. What's a prospective immortal on a budget to do?

Comment author: ata 03 May 2010 06:18:43AM 0 points [-]

I second this query. I've been meaning to post something similar.

Comment author: Jack 03 May 2010 06:27:59AM 3 points [-]

Alcor whole body suspension

Why not save some money and lose what's below the neck?

Comment author: khafra 03 May 2010 11:38:48AM 0 points [-]

That saves about half the life insurance cost while leaving the Alcor dues the same, dropping the cost from ~$200/month to ~$140/month. This doesn't make it a clearly preferable option to me.
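The arithmetic here can be made explicit. The following is a back-of-the-envelope sketch, assuming (as the comment does) that the Alcor dues are unchanged between the two options and that only the life-insurance premium halves; the ~$200 and ~$140 figures are the rough ones quoted above, not official Alcor pricing.

```python
# Rough monthly totals quoted in the thread:
whole_body = 200  # dues + full insurance premium
neuro = 140       # dues + half the insurance premium

# Solve the two-equation system:
#   dues + premium     = whole_body
#   dues + premium / 2 = neuro
premium = 2 * (whole_body - neuro)  # implied full insurance premium
dues = whole_body - premium         # implied fixed Alcor dues

print(f"Implied insurance premium: ${premium}/month")
print(f"Implied Alcor dues: ${dues}/month")
```

Under these assumptions the dues are the larger fixed component ($80/month against a $120/month full premium), which is why halving the insurance only drops the total by 30%, not 50%.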

Comment author: [deleted] 03 May 2010 03:44:22PM *  2 points [-]

If I recall correctly, preservation of the brain is supposed to be easier and, on average, of better quality with the decapitated option (I know I'm using the uncool term) than with the whole-body option.

Comment author: Liron 03 May 2010 07:55:41AM 1 point [-]

I recently heard a physics lecture claim that the luminiferous aether didn't really get kicked out of physics. We still have a mathematical structure, which we just call "the vacuum", through which electromagnetic waves propagate. So all we ever did was kill the aether's velocity-structure, right?

Comment author: ata 03 May 2010 08:14:20AM *  5 points [-]

That reminds me of this discussion.

Of course, if you define "luminiferous aether" as generally as "whatever mechanism results in the propagation of electromagnetic waves", then it exists, because electromagnetic waves do propagate. But when it was under serious scientific consideration, the luminiferous aether theory made testable predictions, and they failed. Just saying "they're different concepts" is easier than saying "it's the same basic concept except it has a different name and the structure of the theory is totally different".

I could sympathize with trying to revive the name "luminiferous aether" (or even better, "luminiferous æther"), though. It's a pretty awesome name. (I go by "Luminiferous Æther Bunny" on a few other forums.)

Comment author: Liron 03 May 2010 08:36:52AM 0 points [-]

Nice link. It would be cool to see a similar discussion for all the classic rejected hypotheses.

Comment author: SilasBarta 03 May 2010 03:36:17PM 6 points [-]

I recalled the strangest thing an AI could tell you thread, and I came up with another one in a dream. Tell me how plausible you think this one is:

Claim: "Many intelligent mammals (e.g. dogs, cats, elephants, cetaceans, and apes) act just as intelligently as feral humans, and would be capable of human-level intelligence with the right enculturation."

That is, if we did to pet mammals something analogous to what we do to feral humans when discovered, we could assimilate them; their deficiencies are the result of a) our not knowing what assimilation regimen is necessary for pet/zoo mammals, and b) mammals in the wild currently being at a lower level of cultural development, one that humans themselves passed through at one time.

Thoughts?

Comment author: RobinZ 03 May 2010 04:20:27PM 0 points [-]

Next step: "Okay, what should I do to test this?"

Comment author: SilasBarta 03 May 2010 05:25:22PM 1 point [-]

Find some method of communication that the mammal can use, and raise it in a society of children that use that method of communication. See if its behavior tracks that of the children in terms of intelligence.

I believe such an experiment has already been performed, involving deaf children, with sign language as the communication method and some kind of ape as the mammal. The ape supposedly adapted very comfortably, behaving just as the children did (except that they taught it to ask for drugs), but the experiment had to be cut off on the grounds that, after another year of growth, the ape would be too strong, and therefore too dangerous, to risk leaving in the presence of children.

I can't find a cite at the moment, but I remember a friend telling me about this and it checked out in an online search.

Comment author: Jack 03 May 2010 05:52:22PM *  0 points [-]

What they need to do is include like 5 or 6 apes with the children and then when they're removed they can continue socializing with each other.

The problem is coming up with methods of communication. Aside from apes and sign language I can't think of any...

Comment author: NancyLebovitz 03 May 2010 06:26:17PM 2 points [-]

African gray parrots and spoken language.

Comment author: JoshuaZ 03 May 2010 06:46:57PM 8 points [-]

Yes, and there's been a lot of work with African Greys already. Irene Pepperberg and her lab have done most of the really pioneering work. They've shown that Greys can recognize colors and small numbers, and in some cases acquire very large vocabularies. There's also evidence that Greys sometimes overcorrect: that is, they apply general grammatical rules to conjugate/decline words even when the words are irregular. This happens with human children as well. Thus, for example, human children will frequently say "runned" when they mean "ran" or "mouses" when they mean "mice," and many similar examples. This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard. Since Greys do the same thing, we can conclude that parrots aren't just parroting.

Comment author: Jack 03 May 2010 06:50:03PM 1 point [-]

This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard.

Yes, it is! I hadn't heard that before. Is there a journal article somewhere?

Comment author: JoshuaZ 03 May 2010 07:03:04PM *  2 points [-]

I'm not aware of any journal articles on overcorrection, and a quick Google search doesn't turn any up. I'll go bug my ornithology friends. In the meantime, here's a BBC article that discusses the matter: http://web.archive.org/web/20060519061120/http://news.bbc.co.uk/2/hi/science/nature/3430481.stm . They give the example of N'kisi's using "flied" for the past tense of "fly" rather than "flew."

Edit: Fixed link. Edit: Link's accuracy is questionable. See Mass Driver's remarks below.

Comment author: NancyLebovitz 03 May 2010 06:23:55PM 1 point [-]

Human languages (including sign) are adapted for human beings. While there's some flexibility, I wouldn't expect animals using human language to be at their best.

Comment author: NancyLebovitz 03 May 2010 04:21:29PM 0 points [-]

Sounds plausible to me. I suspect people haven't been able to develop enculturation for animals--the sensory systems and communication methods are too different.

I also believe people have been unintentionally selecting wild animals for intelligence.

Comment author: gwern 03 May 2010 04:55:05PM 3 points [-]

Some people have curious ideas about what LW is; from http://www.fanfiction.net/r/5782108/18/1/ :

"HO-ley **! That was awesome! You might also be interested to know that my brother, my father and I all had a wonderful evening reading that wikipedia blog on rationality that you are named for. Thank you for this, most dearly and truly."

Comment author: thomblake 03 May 2010 05:07:39PM 2 points [-]

I'm not sure I even know how to parse "wikipedia blog on rationality". But at least in some sense, we apparently are Wikipedia. Congrats.