Open Thread: July 2010

6 Post author: komponisto 01 July 2010 09:20PM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Part 2

Comments (653)

Comment author: PeerInfinity 29 July 2010 04:58:47AM 2 points [-]

an interesting site I stumbled across recently: http://youarenotsosmart.com/

They talk about some of the same biases we talk about here.

Comment author: Cyan 29 July 2010 03:47:26PM 0 points [-]

In fact, the post of July 14 on the illusion of transparency quotes EY's post on the same subject.

Comment author: simplicio 28 July 2010 06:03:39AM *  0 points [-]

I've been listening to a podcast (Skeptically Speaking) episode featuring a fellow named Sherman K. Stein, author of Survival Guide for Outsiders. I haven't read the book, but it seems that the author has a lot of good points about how much weight to give to expert opinions.

EDIT: Having finished listening, I revise my opinion down. It's still probably worth reading, but wait for it to get to the library.

Comment author: NancyLebovitz 26 July 2010 02:58:07AM 5 points [-]

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we now think silly. I’d like to think none of my self-experimentation was based on silly ideas but, silly or not, it often paid off in unexpected ways. At one point I tested the idea that standing more would cause weight loss. Even as I was doing it I thought the premise highly unlikely. Yet this led me to discover that standing a lot improved my sleep.

Seth Roberts

I'm not sure he's right about this, but I'm not sure he's wrong, either. What do you think?

Comment author: RobinZ 26 July 2010 02:42:02PM 1 point [-]

It makes me think of Richard Hamming talking about having "an attack".

Comment author: beriukay 17 July 2010 12:57:05PM *  3 points [-]

I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).

I've been following along and trying to work out the examples, and I'm hitting a roadblock when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ) and the basic axioms of probability theory. Part of my problem is that I haven't been able to meaningfully define the 'YW' in (X || YW | Z), or how it translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint we wouldn't be using the axioms defined earlier in the book. I doubt someone as smart as Pearl would be sloppy in that way, so it has to be something I am overlooking.
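(For readers working through the same page: if "YW" is read as the joint variable (Y, W), so that (X || YW | Z) stands for P( x | y,w,z ) = P( x | z ), then Decomposition — (X || YW | Z) implies (X || Y | Z) — falls out of the law of total probability. A sketch, not Pearl's own presentation:)

```latex
% Premise: P(x \mid y, w, z) = P(x \mid z) for all values x, y, w, z.
\begin{align*}
P(x \mid y, z)
  &= \sum_{w} P(x, w \mid y, z)                  && \text{(total probability)} \\
  &= \sum_{w} P(x \mid y, w, z)\, P(w \mid y, z) && \text{(chain rule)}        \\
  &= \sum_{w} P(x \mid z)\, P(w \mid y, z)       && \text{(premise)}           \\
  &= P(x \mid z) \sum_{w} P(w \mid y, z)
   = P(x \mid z).
\end{align*}
```

The neighbouring properties on that page (Weak Union and the rest) seem to yield to the same kind of manipulation.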

I've been googling variations of the terms on the page, as well as trying to get derivations from Dawid, Spohn, and all the other sources in the footnote, but they all pretty much say the same thing, which is slightly unhelpful. Help would be appreciated.

Edit: It appears I failed at approximating the symbol used in the book. Hopefully that isn't distracting. It should look like the symbol used for orthogonality/perpendicularity, except with a double bar in the vertical.

Comment author: rhollerith_dot_com 17 July 2010 09:46:56PM *  0 points [-]

I know this thread is a bit bloated already without me adding to the din

Do not worry about that. Pearl's Causality is part of the canon of this place.

Comment author: SilasBarta 17 July 2010 03:53:18PM *  0 points [-]

You are right that YW means "Y and W". (The fact that they might be disjoint doesn't matter. It looks like the property you are referring to follows from the definition of conditional independence, but I'm not good at these kinds of proofs.)

And welcome to LW, don't feel bad about adding a question to the open thread.

Comment author: rhollerith_dot_com 17 July 2010 09:28:29PM *  0 points [-]

I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint . . .

You are right that YW means "Y and W" [says Silas].

You're probably right, Silas, that "YW" means "Y and W" (or "y and w" or what have you), but you confuse the matter by stating falsely that the original poster (beriukay) was right in his guess: if it were a union operation, Pearl would write it "Y cup W" or "y or w" or some such.

I do not have the book in front of me, beriukay, so that is the only guidance I can give you given what you have written so far.

Added. I now recall the page you refer to: there are about a dozen "laws" having to do with conditional independence. Now that I remember, I am almost certain that "YW" means "Y intersection W".

Comment author: beriukay 18 July 2010 11:14:32AM 0 points [-]

First, thanks for taking an interest in my question. I just realized that instead of retyping the passage here, I could check whether Google had a scan of the page in question; it does. And unless I am mistaken, when he introduces his probability axioms he explicitly states that he will use a comma to indicate intersection.
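(Under that reading — a comma for intersection, so "YW" is the joint assignment (y, w) — Decomposition can also be sanity-checked numerically. The sketch below is mine, not from the book: it builds a joint distribution over four binary variables that satisfies X || (Y,W) | Z by construction, then verifies that X || Y | Z follows.)

```python
import itertools
import random

random.seed(0)

# Construct P(x, y, w, z) with X independent of (Y, W) given Z, by choosing
# P(z), P(x|z), and P(y, w|z) freely and multiplying them together.
pz = {0: 0.3, 1: 0.7}
px_z = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.6, 1: 0.4}}        # px_z[z][x]
pyw_z = {}                                                # pyw_z[z][(y, w)]
for z in (0, 1):
    raw = {(y, w): random.random() for y in (0, 1) for w in (0, 1)}
    total = sum(raw.values())
    pyw_z[z] = {yw: v / total for yw, v in raw.items()}

joint = {(x, y, w, z): pz[z] * px_z[z][x] * pyw_z[z][(y, w)]
         for x, y, w, z in itertools.product((0, 1), repeat=4)}

def p(**fixed):
    """Marginal probability of the given assignments, e.g. p(x=1, z=0)."""
    return sum(pr for key, pr in joint.items()
               if all(key["xywz".index(name)] == v for name, v in fixed.items()))

# Decomposition: X _||_ (Y, W) | Z holds by construction, so X _||_ Y | Z
# should hold too, i.e. P(x | y, z) == P(x | z) for every x, y, z.
max_diff = 0.0
for x, y, z in itertools.product((0, 1), repeat=3):
    lhs = p(x=x, y=y, z=z) / p(y=y, z=z)   # P(x | y, z)
    rhs = p(x=x, z=z) / p(z=z)             # P(x | z)
    max_diff = max(max_diff, abs(lhs - rhs))

print(max_diff)   # essentially zero, up to floating-point rounding
```

Because the joint factors as P(z) P(x|z) P(y,w|z), the identity holds exactly, so the printed discrepancy is only floating-point noise.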

Comment author: rhollerith_dot_com 18 July 2010 02:13:37PM *  0 points [-]

I am afraid I cannot agree with you.

Have you succeeded in your stated intention of "deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory"?

If you wish to continue discussing this problem with me, I humbly suggest that the best way forward is for you to show me your proof of that. And we might take the discussion to email if you like.

It is great that you are studying Pearl.

Comment author: SilasBarta 17 July 2010 09:45:42PM 1 point [-]

Sorry, I'm bad about that terminology. Thanks for the correction.

Comment author: Taure 14 July 2010 10:33:35PM *  1 point [-]

Is self-ignorance a prerequisite of human-like sentience?

I present here some ideas I've been considering recently with regards to philosophy of mind, but I suppose the answer to this question would have significant implications for AI research.

Clearly, our instinctive perception of our own sentience/consciousness is one which is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.

Yet I take it as true that our brains - like everything else - are purely physical. No mysticism here, thank you very much. If they are physical, then everything that occurs within them is causally deterministic. I avoid here any implications regarding free will (a topic I regard as mostly nonsense anyway). I simply point out that our brain processes will follow a causal narrative thus: input leads to brain state A, which leads to brain state B, which leads to brain state C, and so on. These processes are entirely physical, and therefore, theoretically (not practically - yet), entirely predictable.

Now, ask yourself this question: what would our self-perception be like, if it was entirely accurate to the physical reality? If there was no barrier of ignorance between our consciousness and the inner workings of our brains?

With every idea, thought, emotion, plan, memory and action we had, we would be aware of the brainwave that accompanied it - the specific pattern of neuronal firings, and how they built up to create semantically meaningful information. Further, we'd see how this brain state led to the following brain state, and so on. We would perceive ourselves as purely mechanical.

In addition, as our brain is not a single entity, but a massive network of neurons, collected into different systems (or modules), working together but having separate functions, we would not think of our mental processes as unified - at least nowhere near as much as we do now. We would no longer attribute our thoughts and mental life to an "I", but to the totality of mechanical processes that - when we were ignorant - built up to create a unified sense of "I".

I would tentatively suggest that such a sense of self is incompatible with our current sense of self. That how we act and behave and think, how we see ourselves and others, is intrinsically tied to the way we perceive ourselves as non-mechanical, possessing a mystical will - an I - which goes where it chooses (of course academically you may recognise that you're a biological machine, but instinctually we all behave as if we weren't). In short, I would suggest that our ignorance of our neural processes is necessary for the perception of ourselves as autonomous sentient individuals.

The implications of this, were it true, are clear. It would be impossible to create an AI which was both able to perceive and alter its own programming, while maintaining a human-like sentience. That's not to say that such an AI would not be sentient - just that it would be sentient in a very different way to how we are.

Secondly, we would possibly not even be able to recognise this other-sentience, such was the difference. For every decision or proclamation the AI made, we would simply see the mechanical programming at work, and say "It's not intelligent like we are, it's just following mechanical principles". (Think, for example, of Searle's Chinese Room, which I take only shows that if we can fully comprehend every stage of an information manipulation process, most people will intuitively think it to be not sentient). We would think our AI project unfinished, and keep trying to add that "final spark of life", unaware that we had completed the project already.

Comment author: steven0461 14 July 2010 10:52:55PM 0 points [-]

I don't think there is really such a thing as introverted and extroverted people at all. People are encouraged to think of these things as part of their "essential character" (TM) - or even their biology.

Here's some evidence the other way -- paywalled, but the gist is on the first page.

Comment author: Taure 14 July 2010 11:11:20PM 0 points [-]

Um, thanks, but I think wrong thread.

Comment author: steven0461 14 July 2010 11:13:51PM 0 points [-]

Oops, you're right.

Comment author: Will_Newsome 09 July 2010 01:52:40AM 2 points [-]

So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend; you can always count on them to point out your flaws) that I'm too logical and rational and emotionless, and that I can't connect with people or understand them, et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my always-seeking-outside-confirmation-of-competence-style narcissism, b) my overly precise (for most people, not here) speech patterns (for instance, when my ex said I suck at understanding people, I asked "Why do you believe that?" instead of the simpler and less clinical-psychologist-sounding "How so?" or "How?" or what not), and c) accidentally bringing up terms like 'a priori' which apparently most people haven't heard. I think there's more low-hanging fruit here, though. Tsuyoku naritai!

Has anyone else tackled these problems? It's not that I lack charisma - I've managed to pull off that insane/passionate/brilliant thing among my friends - but I do seem to lack the ability to really connect with people - even people I really care about. Do Less Wrongers experience similar problems? Any advice? Or meta-advice about how to learn hard-to-describe dispositions? I've noticed that consciously acting like I was Regina Spektor in one situation or Richard Feynman in another seems to help, for instance.

Comment author: [deleted] 10 July 2010 04:48:30PM 3 points [-]

I think most people here have some sort of similar problem. Mine isn't being emotionless (ha!) but not knowing the right thing to say, putting my foot in my mouth, and so on. Occasionally coming across as a pedant, which is so embarrassing.

I may be getting better at it, though. One thing is: if you are a nerd (in the sense of passionate about something abstract) just roll with it. You will get along better with similar people. Your non-nerdy friends will know you're a nerd. I try to be as nice as possible so that when, inevitably, I say something clumsy or reveal that I'm ignorant of something basic, it's not taken too negatively. Nice but clueless is much better than arrogant.

And always wait for a cue from the other person to reveal something about yourself. Don't bring up politics unless he does; don't mention your interests unless he asks you; don't use long words unless he does.

I can't dance for shit, but various kinds of exercise are a good way to meet a broader spectrum of people.

Do I still feel like I'm mostly tolerated rather than liked? Yeah. It can be pretty depressing. But such is life.

As for dating -- the numbers are different from my perspective, of course, but so far I've found I'm not going to click really profoundly with guys who aren't intelligent. I don't mean that in a snobbish way, it's just a self-knowledge thing -- conversation is really fun for me, and I have more fun spending time with quick, talkative types. There's no point forcing yourself to be around people you don't enjoy.

Comment author: katydee 10 July 2010 09:01:32AM 1 point [-]

I have myself been accused of being an android or replicant on many occasions. The best way that I've found to deal with this is to make jokes and tell humorous anecdotes about the situation, especially ones that poke fun at myself. This way, the accusation itself becomes associated with the joke and people begin to find it funny, which makes it "unserious."

Comment author: knb 09 July 2010 09:30:02AM 2 points [-]

In my experience, something as simple as adding a smile can transform a demeanor otherwise perceived as "cold" or "emotionless" to "laid-back" or "easy-going".

Comment author: Kevin 09 July 2010 06:52:09AM *  4 points [-]

b) my overly precise (for most people, not here) speech patterns

The kind of ultra-rational Bayesian linguistic patterns used around here would be considered obnoxiously intellectual and pretentious (and incomprehensible?) by most people. Practice mirroring the speech patterns of the people you are communicating with, and slip into rationalist talk when you need to win an argument about something important.

When I'm talking to street people, I say "man" a lot because it's something of a high honorific. Maybe in California I will need to start saying "dude", though man seems inherently more respectful.

Comment author: wedrifid 09 July 2010 02:29:22AM *  6 points [-]

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode. (And less time with your ex!)

A perfect form of practice is dance. Take swing dancing lessons, for example. That removes the possibility of using your overwhelming verbal fluency and persona of intellectual brilliance. It makes it far easier to activate that part that is sometimes called 'human' but perhaps more accurately called 'animal'. Once you master maintaining the social connection in a purely non-verbal setting, adding in a verbal component while maintaining the flow should be far simpler.

Comment author: Will_Newsome 09 July 2010 02:34:53AM *  2 points [-]

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode.

Non-nerdy people who are interesting are surprisingly difficult to find, and I have a hard time connecting with the ones I do find such that I don't get much practice in. I'm guessing that the biggest demographic here would be artists (musicians). Being passionate about something abstract seems to be the common denominator.

(And less time with your ex!)

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Once you master maintaining the social connection in a purely non-verbal setting, adding in a verbal component while maintaining the flow should be far simpler.

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

Comment author: Kevin 09 July 2010 06:45:10AM 1 point [-]

Interesting, I had discounted dancing because of its nonverbality.

In my last semester at college, I figured I should take fun classes while I could, so I took two one credit drumming classes. In African Drumming Ensemble, we spent 90% of the time doing complex group dances and not drumming, because the drumming was so much easier to learn than the dancing.

Being tricked into taking a dance class was broadly good for my social skills, not the least my confidence on a dance floor.

Comment author: wedrifid 09 July 2010 03:45:56AM 4 points [-]

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

I was using very similar reasoning when I suggested "non-nerds or nerds not presently in nerd mode". The key is to hide the abstract-discussion crutch!

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Friends who are willing to suggest improvements (Tsuyoku naritai) sincerely are valuable resources! If your ex is able to point out a flaw then perhaps you could ask her to lead you through an example of how to have a 'warm, human' interaction, showing you the difference between that and what you usually do? Mind you, it is still almost certainly better to listen to criticism from someone who has a vested interest in your improvement rather than your acknowledgement of flaws. Like, say, a current girlfriend. ;)

Comment author: JoshuaZ 09 July 2010 02:14:44AM 2 points [-]

Date nerdier people? In general, many nerdy rational individuals have a lot of trouble getting along with not-so-nerdy individuals. There's some danger that I'm other-optimizing, but I have trouble imagining how an educated rational individual would be able to date someone who thought that there was something wrong with using terms like "a priori." That's a common enough term, and if someone hears a term they don't know, they should be happy to learn something. So maybe just date a different sort of person?

Comment author: Will_Newsome 09 July 2010 02:27:25AM *  1 point [-]

I wasn't talking mostly about dating, but I suppose that's an important subfield.

The topic you mention came up at the Singularity Institute Visiting Fellows house a few weeks. 3 or 4 guys, myself included, expressed a preference for girls who had specialized in some other area of life: gains from trade of specialized knowledge. And I just love explaining to a girl how big the universe is and how gold is formed in super novas... most people can appreciate that, even if they see no need for using the word 'a priori'. I don't mean average intelligence, but one standard deviation above the mean intelligence. Maybe more; I tend to underestimate people. There was 1 person who was rather happy with his relationship with a girl who was very like him. However, the common theme was that people who had more dating experience consistently preferred less traditionally intelligent and more emotionally intelligent girls (I'm not using that term technically, by the way), whereas those with less dating experience had weaker preferences for girls who were like themselves. Those with more dating experience also seemed to put much more emphasis on the importance of attractiveness instead of e.g. intelligence or rationality. Not that you have to choose or anything, most of the time. I'm going to be so bold as to claim that most people with little dating experience that believe they would be happiest with a rationalist girlfriend should update on expected evidence and broaden their search criteria for potential mates.

As for preferences of women, I'm sorry, but the sample size was too small for me to see any trends. (To be fair this was a really informal discussion, not an official SIAI survey of course. :) )

Important addendum: I never actually checked to see if any of the guys in the conversation had dated women who were substantially more intelligent than average, and thus they might not have been making a fair comparison (imagining silly arguments about deism versus atheism or something). I myself have never dated a girl that was 3 sigma intelligent, for instance. I'm mostly drawing my comparison from fictional (imagined) evidence.

Comment author: [deleted] 12 July 2010 02:03:52AM 0 points [-]

I think that the quality of relationships depends less on the fluid intelligence of the partners, or on anything else they might have in common, and more on their level of emotional maturity (empathy, non-self-absorption, communication skills, generosity), as well as their attachment to and affection for one another.

You may become more attached to, or feel more affection for, someone you believe to be intelligent, but then again you might achieve the same emotional connection through, for example, shared life experiences. Intelligence and common interests may make a mate more entertaining, but in my experience it's really not terribly important for my boyfriend to entertain me; we can always go see a movie or play a game together for entertainment.

I'm arguing, in short, that intelligence is mostly irrelevant to relationship quality.

On a more personal note, I can testify that, however much you might admire intelligence per se, it is a terrible idea to date someone who is nearly but not quite as intelligent as yourself, who is also crushingly insecure.

Comment author: JoshuaZ 09 July 2010 02:33:50AM 2 points [-]

I've dated females who were clearly less intelligent than I am, some about the same, and some clearly more intelligent. I'm pretty sure the last category was the most enjoyable (I'm pretty sure that rational, intelligent, nerdy females don't want to date guys who aren't as smart as they are, either). There may be issues with sample size.

Comment author: Will_Newsome 09 July 2010 02:36:56AM 0 points [-]

Hm, probably. I'm not sure what my priors would be, either. So my distribution's looking pretty flat at the moment, especially after your contrary evidence.

Comment author: WrongBot 09 July 2010 02:08:32AM 6 points [-]

"Fake it until you make it" is surprisingly good advice for this sort of thing. I had moderate self-esteem issues in my freshman year of college, so I consciously decided to pretend that I had very high self-esteem in every interaction I had outside of class. This may be one of those tricks that doesn't work for most people, but I found that using a song lyric (from a song I liked) as a mantra to recall my desired state of mind was incredibly helpful, and got into the habit of listening to that particular song before heading out to meet friends. (The National's "All The Wine" in this particular case. "I am a festival" was the mantra I used.)

That's in the same class of thing as acting like Regina Spektor or Feynman; if you act in a certain way consistently enough, your brain will learn that pattern and it will begin to feel more natural and less conscious. I don't worry about my self-esteem any more (in that direction, at least).

Comment author: ciphergoth 08 July 2010 02:39:19PM *  15 points [-]

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

Comment author: Wei_Dai 08 July 2010 07:06:06PM 2 points [-]

I wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?

Comment author: red75 08 July 2010 09:22:11PM *  1 point [-]

It seems plausible that the "know more" part of EV should include the result of modelling the application of CEV to humanity, i.e. CEV is not just the result of aggregating individuals' EVs, but one of the fixed points of humanity's EV after reflection on the results of applying CEV.

Maybe Peggy's model will see that her preferences would result in unnecessary deaths, and that death is no longer a necessary part of society's existence or of her children's prosperity.

Comment author: Wei_Dai 08 July 2010 10:20:04PM 2 points [-]

It seems to me if it were just some factual knowledge that Peggy is missing, Robin would have been able to fill her in and thereby change her mind.

Of course Robin isn't a superintelligent being, so perhaps there is an argument that would change Peggy's mind that Robin hasn't thought of yet, but how certain should we be of that?

Comment author: red75 08 July 2010 10:57:43PM *  2 points [-]

I meant something like embedding her in a culture where death is unnecessary, rather than arguing for it directly. Words aren't the best communication channel for changing moral values. Will it be enough? I hope so, if the death of the carriers of moral values isn't a necessary condition for moral progress.

Edit: BTW, if CEV is computed using humans' reflection on its application, then the FAI cannot passively combine all volitions; it must search for and somehow choose a fixed point. Which rule should govern that process?

Comment author: Nick_Tarleton 08 July 2010 10:28:27PM *  6 points [-]

Communicating complex factual knowledge in an emotionally charged situation is hard, to say nothing of actually causing a change in deep moral responses. I don't think failure is strong evidence for the nonexistence of such information. (Especially since I think one of the most likely sorts of knowledge to have an effect is about the origin — evolutionary and cognitive — of the relevant responses, and trying to reach an understanding of that is really hard.)

Comment author: Wei_Dai 08 July 2010 11:18:49PM *  3 points [-]

You make a good point, but why is communicating complex factual knowledge in an emotionally charged situation hard? It must be that we're genetically programmed to block out other people's arguments when we're in an emotionally charged state. In other words, one explanation for why Robin has failed to change Peggy's mind is that Peggy doesn't want to know whatever facts or insights might change her mind on this matter. Would it be right for the FAI to ignore that "preference" and give Peggy's model the relevant facts or insights anyway?

ETA: This does suggest some practical advice: try to teach your wife and/or mom the relevant facts and insights before bringing up the topic of cryonics.

Comment author: Kevin 08 July 2010 11:36:03PM *  11 points [-]

You are underestimating just how enormously Peggy would have to change her mind. Her life's work involves emotionally comforting people and their families through the final days of terminal illness. She has accepted her own mortality and the mortality of everyone else as one of the basic facts of life. As no one has been resurrected yet, death still remains a basic fact of life for those that don't accept the information theoretic definition of death.

To change Peggy's mind, Robin would not just have to convince her to accept his own cryonic suspension, but she would have to be convinced to change her life's work -- to no longer spend her working hours convincing people to accept death, but to convince them to accept death while simultaneously signing up for very expensive and very unproven crazy sounding technology.

Changing the mind of the average cryonics-opposed life partner should be a lot easier than changing Peggy's mind. Most cryonics-opposed life partners have not dedicated their lives to something diametrically opposed to cryonics.

Comment deleted 08 July 2010 11:28:35PM *  [-]
Comment author: Larks 09 July 2010 12:56:57AM 1 point [-]

Better yet, sign up while you're single, and present it as a fait accompli. It won't get her signed up, but I'd be willing to bet she won't try to make you drop your subscription.

Comment author: lmnop 08 July 2010 11:37:30PM *  0 points [-]

Well, the practical advice is being offered to LW, and I'd guess that most of the people here are not of average IQ, and neither are their friends and family. I personally think it's a great idea to try to give someone the relevant factual background to understand why cryonics is desirable before bringing up the option. It probably wouldn't work, simply because almost all attempts to sell cryonics to anyone don't work, but it should at least decrease the probability of them reacting with a knee-jerk dismissal of the whole subject as absurd.

Comment deleted 08 July 2010 11:57:17PM *  [-]
Comment author: lmnop 09 July 2010 12:09:48AM 1 point [-]

I mostly agree with you. I would even expand your point to say that if you want to convince anyone (who isn't a perfect Bayesian) to do anything, the probability of success will almost always be higher if you use primarily emotional manipulation rather than rational argument. But cryonics inspires such strong negative emotional reactions in people that I think it would be nearly impossible to combat those with emotional manipulation of the type you describe alone. I haven't heard of anyone choosing cryonics for themselves without having to make a rational effort to override their gut response against it, and that requires understanding the facts. Besides, I think the type of males who choose cryonics tend to have female partners of at least above-average intelligence, so that should make the explanatory process marginally less difficult.

Comment author: Alicorn 08 July 2010 11:36:15PM 4 points [-]

Is this generalizable? Should I, too, threaten my loved ones with abandonment whenever they don't do what I think would be best?

Comment author: Alexandros 09 July 2010 09:48:19AM 1 point [-]

I don't think this is about doing what you think best; it's about allowing you to do what you think best. And yes, you should definitely threaten abandonment in these cases, or at least you're definitely entitled to threaten and/or practice abandonment in such cases.

Comment author: steven0461 08 July 2010 10:42:06PM 1 point [-]

Yes -- calling it "factual knowledge" suggests it's only about the sort of fact you could look up in the CIA World Factbook, as opposed to what we would normally call "insight".

Comment author: Clippy 08 July 2010 07:22:06PM 3 points [-]

It should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep.

Then it should use Peggy's body and Robin's body for fuel.

Comment author: WrongBot 08 July 2010 05:12:21PM 9 points [-]

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. My father and I pointed out that this would literally require the existence of magic, she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn't shocked to observe them in the wild. What is shocking to me is that someone who is otherwise quite rational would feel so motivated to protect this particular belief about cryonics. Why is this so important?

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused. I've seen a couple of explanations for this phenomenon, but they aren't convincing: if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections? The selfishness objection doesn't seem like it would be something one would be penalized for making.

Comment deleted 08 July 2010 10:31:28PM *  [-]
Comment author: NancyLebovitz 09 July 2010 12:02:17AM 1 point [-]

Wanting cryo signals disloyalty to your present allies.

I don't see why you'd be showing disloyalty to those of your allies who are also choosing cryo.

Here are some more possible reasons for being opposed to cryo.

Loss aversion. "It would be really stupid to put in that hope and money and get nothing for it."

Fear that it might be too hard to adapt to the future society. (James Halperin's The First Immortal has it that no one gets thawed unless someone is willing to help them adapt. Would that make cryo seem more or less attractive?)

And, not being an expert on women, I have no idea why there's a substantial difference in the proportions of men and women who are opposed to cryo.

Comment deleted 09 July 2010 12:08:33AM *  [-]
Comment author: Wei_Dai 09 July 2010 08:18:23PM 0 points [-]

It also seems to be a signal of disloyalty/lower commitment to say, "No honey, I won't throw myself on your funeral pyre after you die." Why don't we similarly demand "Yes, I could keep on living, but I think life would be meaningless without you by my side, so I won't bother" in that case?

Comment author: Wei_Dai 08 July 2010 10:51:24PM 3 points [-]

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Comment author: wedrifid 09 July 2010 02:59:16AM *  1 point [-]

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

If my spouse played that card too hard I'd sign up to cryonics then I'd dump them. ("Too hard" would probably mean more than one issue and persisting against clearly expressed boundaries.) Apart from the manipulative aspect it is just, well, stupid. At least manipulate me with "you will be abandoning me!" you silly man/woman/intelligent agent of choice.

Comment deleted 08 July 2010 11:07:18PM *  [-]
Comment author: lmnop 08 July 2010 11:25:12PM *  3 points [-]

In the case of refusing cryonics, I doubt that fear of social judgment is the largest factor or even close. It's relatively easy to avoid judgment without incurring terrible costs--many people signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.

Comment author: Will_Newsome 09 July 2010 01:30:32AM *  1 point [-]

For what it's worth, Steven Kaas emphasized social weirdness as a decent argument against signing up. I'm not sure what his reasoning was, but given that he's Steven Kaas I'm going to update on expected evidence (that there is a significant social cost to signing up that I cannot at the moment see).

Comment author: Wei_Dai 09 July 2010 06:27:04AM 4 points [-]

I don't get why social weirdness is an issue. Can't you just not tell anyone that you've signed up?

Comment author: gwern 09 July 2010 06:45:43AM *  2 points [-]

The NYT article points out that you sometimes want other people to know - your wife's cooperation at the hospital deathbed will make it much easier for the Alcor people to whisk you away.

Comment author: Vladimir_Nesov 09 July 2010 08:19:40AM 2 points [-]

It's not an argument against signing up, unless the expected utility of the decision is borderline positive and it's specifically the increased probability of failure because of lack of additional assistance of your family that tilts the balance to the negative.

Comment author: JoshuaZ 08 July 2010 11:06:24PM 1 point [-]

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Voted up as an interesting suggestion. That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with. Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

Comment author: Wei_Dai 08 July 2010 11:39:09PM 0 points [-]

That said, I think that if anyone feels a need to be playing that card in a preemptive fashion then a relationship is probably not very functional to start with.

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

Moreover, given that signing up is a change from the status quo I suspect that attempting to play that card would go over poorly in general.

Right, so sign up before entering the relationship, then play that card. :)

Comment author: JoshuaZ 09 July 2010 01:42:02AM 4 points [-]

Can you expand on that? I'm not sure why this particular card is any worse than what people in functional relationships typically do.

We may have different definitions of "functional relationship." I'd put very high on the list of elements of a functional relationship that people don't go out of their way to consciously manipulate each other over substantial life decisions.

Comment author: Wei_Dai 09 July 2010 08:29:03AM 1 point [-]

Um, it's a matter of life or death, so of course I'm going to "go out of my way".

As for "consciously manipulate", it seems to me that people in all relationships consciously manipulate each other all the time, in the sense of using words to form arguments in order to convince the other person to do what they want. So again, why is this particular form of manipulation not considered acceptable? Is it because you consider it a lie, that is, you don't think you would really feel betrayed or abandoned if your significant other decided not to sign up with you? (In that case would it be ok if you did think you would feel betrayed/abandoned?) Or is it something else?

Comment author: wedrifid 09 July 2010 09:51:23AM 2 points [-]

So again, why is this particular form of manipulation not considered acceptable?

It is a good question. The distinctive feature of this class of influence is the overt use of guilt and shame, combined with the projection of the speaker's alleged emotional state onto the actual physical actions of the recipient. It is symptomatic of a relationship dynamic that many people consider immature and unhealthy.

Comment author: Wei_Dai 09 July 2010 08:56:00PM 0 points [-]

It is symptomatic of a relationship dynamic that many people consider immature and unhealthy.

I'm tempted to keep asking why (ideally in terms of game theory and/or evolutionary psychology) but I'm afraid of coming across as obnoxious at this point. So let me just ask: do you think there is a better way of making the point that, from the perspective of the cryonicist, he's not abandoning his SO, but rather it's the other way around? Or do you think that it's not worth bringing up at all?

Comment author: lsparrish 08 July 2010 11:57:56PM *  5 points [-]

I would say that if you aren't yet married, be prepared to dump them if they won't sign up with you. Because if they won't, that is a strong signal to you that they are not a good spouse. These kinds of signals are important to pay attention to in the courtship process.

After marriage, you are hooked regardless of what decision they make on their own suspension arrangements, because it's their own life. You've entered the contract, and the fact they want to do something stupid does not change that. But you should consider dumping them if they refuse to help with the process (at least in simple matters like calling Alcor), as that actually crosses the line into betrayal (however passive) and could get you killed.

Comment author: Alicorn 08 July 2010 10:35:28PM 5 points [-]

If you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!"

However, it was my dad, not my mom, who called me selfish when I brought up cryo.

Comment deleted 08 July 2010 10:40:35PM [-]
Comment author: rhollerith_dot_com 09 July 2010 04:21:50AM *  1 point [-]

I think that what would work is signing up before you start a relationship, and making it clear that it's a part of who you are.

Ah, but did you notice that that did not work for Robin? (The NYT article says that Robin discussed it with Peggy when they were getting to know each other.)

Comment author: Nisan 09 July 2010 12:54:27PM 4 points [-]

It "worked" for Robin to the extent that Robin got to decide whether to marry Peggy after they discussed cryonics. Presumably they decided that they preferred each other to hypothetical spouses with the same stance on cryonics.

Comment author: rhollerith_dot_com 09 July 2010 01:39:21PM *  0 points [-]

Thanks. (Upvoted.)

Comment author: NancyLebovitz 08 July 2010 06:09:16PM 3 points [-]

I don't have anything against cryo, so these are tentative suggestions.

Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field.

Alternatively, some people are trudging through life, and they don't want it to go on indefinitely.

Or there are people they want to get away from.

However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.

Comment author: Blueberry 08 July 2010 06:01:06PM *  2 points [-]

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused.

Is there evidence for this? Specifically the "intense" part?

ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?

Comment author: WrongBot 08 July 2010 06:19:55PM 0 points [-]

The evidence is largely anecdotal, I think. There are certainly stories of cryonics ending marriages out there.

I haven't yet asked her about it, but I plan to do so next time we talk.

Comment author: SilasBarta 08 July 2010 05:36:45PM *  6 points [-]

if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections?

I -- quite predictably -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face, so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share.

For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding that "you'll have to talk about this with your future wife, who may find it loopy".

(Joke's on her -- at this rate, no woman will take that job!)

Comment author: cousin_it 08 July 2010 05:42:04PM *  0 points [-]

Sometime ago I offered this explanation for not signing up for cryo: I know signing up would be rational, but can't overcome my brain's desire to make me "look normal". I wonder whether that explanation sounds true to others here, and how many other people feel the same way.

Comment author: SilasBarta 08 July 2010 10:50:42PM *  0 points [-]

I'm in a typical decision-paralysis state. I want to sign up, I have the money, but I'm also interested in infinite banking, which requires you to get a whole-life plan [1], which would have to be coordinated, which makes it complicated and throws off an ugh field.

What I should probably do is just get the term insurance, sign up for cryo, and then buy amendments to the life insurance contract if I want to get into the infinite banking thing.

[1] Save your breath about the "buy term and invest the difference" spiel, I've heard it all before. The investment environment is a joke.

Comment author: mattnewport 08 July 2010 11:06:25PM 0 points [-]

I'm also interested in infinite banking, which requires you to get a whole-life plan

You mentioned this before and I had a quick look at the website and got the impression that it is fairly heavily dependent on US tax laws around whole life insurance and so is not very applicable to other countries. Have you investigated it enough to say whether my impression is accurate or if this is something that makes sense in other countries with differing tax regimes as well?

Comment author: SilasBarta 08 July 2010 11:15:15PM *  0 points [-]

I haven't read about the laws in other countries, but I suspect they at least share the aspect that it's harder to seize assets stored in such a plan, giving you more time to lodge an objection if they get a lien on it.

Comment author: mattnewport 08 July 2010 05:47:01PM 0 points [-]

For a variety of reasons I don't think cryonics is a good investment for me personally. The social cost of looking weird is certainly a negative factor, though not the only one.

Comment author: whpearson 08 July 2010 05:25:32PM 0 points [-]

If I was going to make a guess, I suspect that saying X is selfish can easily lead to the rejoinder, "It is my money, I have the right to choose what to do with it," especially within the modern world. Saying X is selfish so it shouldn't be done can also be seen as interfering with another person's business, which is frowned upon in lots of social circles. It is also called moralising. So she may be unconsciously avoiding that response.

Comment author: WrongBot 08 July 2010 05:40:09PM 1 point [-]

This may be true in some cases, but I don't think it is in this one; my mom has no trouble moralizing on any other topic, even ones about which I care a great deal more than I do about cryonics. For example, she's criticized polyamory as unrealistic and bisexuality as non-existent on multiple occasions, both of which have a rather significant impact on how I live my life.

Comment author: whpearson 08 July 2010 05:53:28PM 1 point [-]

I wasn't there at the discussions, but those seem like different types of statements than saying that they are "wrong/selfish" and that by implication you are a bad person for doing them. She is impugning your judgement in all cases rather than your character.

Comment author: WrongBot 08 July 2010 06:00:29PM 1 point [-]

An important distinction, it's true. I feel like it should make a difference in this situation that I declared my intention to not pursue cryopreservation, but I'm not sure that it does.

Either way, I can think of other specific occasions when my mom has specifically impugned my character as well as my judgment. ("Lazy" is the word that most immediately springs to mind, but there are others.)

It occurs to me that as I continue to add details my mom begins to look like a more and more horrible person; this is generally not the case.

Comment author: Vladimir_Nesov 08 July 2010 04:47:25PM *  1 point [-]

Good article overall. It gives a human feel to the decision of cryonics, in particular by focusing on an unfair assault it attracts (thus appealing to cryonicists' status concerns).

Comment author: mattnewport 08 July 2010 04:29:53PM 1 point [-]

The hostile wife phenomenon doesn't seem to have been mentioned much here. Is it less common than the article suggests or has it been glossed over because it doesn't support the pro-cryonics position? Or has it been mentioned and I wasn't paying attention?

Comment author: HughRistik 08 July 2010 05:29:48PM 0 points [-]

It was mentioned, and you weren't paying attention ;)

Comment author: mattnewport 08 July 2010 05:48:45PM 0 points [-]

I did think this was quite a likely explanation. As I'm not married the point would likely not have been terribly salient when reading about pros and cons.

Comment author: ata 08 July 2010 05:07:00PM *  1 point [-]

At last count (a while ago admittedly), most LWers were not married, and almost none were actually signed up for cryonics. So perhaps this phenomenon just isn't a salient issue to most people here.

Comment author: ciphergoth 09 July 2010 07:33:02AM 1 point [-]

Data point FWIW: my partners are far from convinced of the wisdom of cryonics, but they respect my choices. Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Comment author: gwern 09 July 2010 10:19:05AM 0 points [-]

Much of the strongest opposition has come from my boyfriend, who keeps saying "why not just buy a lottery ticket? It's cheaper".

Well, I hoped you showed him your expected utility calculations!

Comment author: ciphergoth 09 July 2010 11:23:28AM 1 point [-]

I'm afraid that isn't really a good fit for how he thinks about these things...

Comment author: Sniffnoy 09 July 2010 11:26:06AM 0 points [-]

It seems a bit odd to me that he would use the lottery comparison, in that case. Or no?

Comment author: Kingreaper 09 July 2010 11:36:21AM *  2 points [-]

They're both things with low probabilities of success, and extremely large pay-offs.

To someone with a certain view of the future, or a moderately low "maximum pay-off" threshold, the pay-off of cryonics could be the same as the pay-off for a lottery win.

At which point the lottery is a cheaper, but riskier, gamble. Again, if someone has a certain view of the future, or a "minimum probability" threshold (which both are under) then this difference in risk could be unnoticed in their thoughts.

At which point the two become identical, but one is more expensive.

It's quick-and-dirty thinking, but it's one easy way to end up with the connection, and it doesn't involve any utility calculations (in fact, utility calculations would be anathema to this sort of thinking).

Comment author: ciphergoth 09 July 2010 11:58:49AM 2 points [-]

One big barrier I hit in talking to some of those close to me about this is that I can't seem to explain the distinction between wanting the feeling of hope that I might live a very long time, and actually wanting to live a long time. Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

Comment author: Nisan 09 July 2010 01:47:36PM 2 points [-]

Lots of people just say "if you want to believe in life after death, why not just go to church? It's cheaper".

I could see people saying that if they don't believe that cryonics has any chance at all of working. It might be hard to tell. If I told people "there's a good chance that cryonics will enable me to live for hundreds of years", I'm sure many would respond by nodding, the same way they'd nod if I told them that "there's a good chance that I'll go to Valhalla after I die". Sometimes respect looks like credulity, you know? Do you think that's what's happening here?

Comment author: RichardKennaway 09 July 2010 01:42:41PM 2 points [-]

And if you reply "I only want to believe in things that are true?"

Comment author: Sniffnoy 09 July 2010 12:26:08PM 0 points [-]

That's a bit scary.

Comment author: Morendil 08 July 2010 05:17:15PM 3 points [-]

I'm married and with kids, my wife supports my (so far theoretical only) interest in cryo. Though she says she doesn't want it for herself.

Comment author: Vladimir_Nesov 08 July 2010 03:25:53PM *  4 points [-]

A factual error:

when he first announced his intention to have his brain surgically removed from his freshly vacated cadaver and preserved in liquid nitrogen

I'm fairly sure that head-only preservation doesn't involve any brain-removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards the phrases that accomplish this goal won over the constraint of not generating fiction.

Comment author: wedrifid 08 July 2010 03:19:10PM 2 points [-]

That was very nearly terrifying.

Comment author: Kevin 08 July 2010 01:38:59AM 4 points [-]

Conway's Game of Life in HTML 5

http://sixfoottallrabbit.co.uk/gameoflife/

Comment author: RobinZ 08 July 2010 04:36:15AM *  1 point [-]

Playing Conway's Life is a great exercise - I recommend trying it, to anyone who hasn't. Feel free to experiment with different starting configurations. One simple one which produces a wealth of interesting effects is the "r pentomino":

Edit: Image link died - see Vladimir_Nesov's comment, below.
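For anyone who wants to tinker without the linked page, the rules fit in a few lines of Python. This is a minimal sketch; the r-pentomino coordinates are my own transcription, since the image link is dead:

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Life on an unbounded grid.
    `live` is a set of (row, col) tuples of live cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The r-pentomino:
#   .XX
#   XX.
#   .X.
r_pentomino = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}

gen = r_pentomino
for _ in range(10):
    gen = step(gen)
print(len(gen))  # population after 10 generations
```

Run it longer (the r-pentomino doesn't settle down for over a thousand generations) to see why it's such a famous starting pattern.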

Comment author: Vladimir_Nesov 20 April 2012 03:06:32PM *  1 point [-]

The link to the image died, here it is:

Comment author: SilasBarta 07 July 2010 09:01:41PM *  2 points [-]

Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial characters of a word or message, so you would probably spend more strokes on specifying those, but then make it up with some "autocomplete" feature for large portions of the message.

If that's too hard, it should be a lot easier to do a 3-input method, which only requires your message set to have an entropy of less than ~1.5 bits per character.

Just thought I'd point that out, as it might be something worth thinking about.
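As a crude sanity check on that one-bit figure, an off-the-shelf compressor gives an upper bound on the entropy rate of a text (only an upper bound; general-purpose compressors land well above 1 bit/character on real prose, and the repetitive sample below is just for illustration):

```python
import bz2
import lzma

def bits_per_char(compress, text):
    """Upper-bound the entropy rate of `text`, in bits per character,
    by measuring how small a general-purpose compressor can make it."""
    raw = text.encode("utf-8")
    return 8 * len(compress(raw)) / len(raw)

# A toy sample; for a meaningful estimate you'd feed in a large,
# non-repetitive English corpus.
sample = ("it was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness. ") * 50

print(bits_per_char(bz2.compress, sample))
print(bits_per_char(lzma.compress, sample))
```

On ordinary English text you'd expect something in the neighbourhood of 2 bits/character from these tools, i.e. still a factor of two or so above the human-prediction estimates.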

Comment author: gwern 07 July 2010 11:59:45PM 3 points [-]

Already done; see Dasher and especially its Google Tech Talk.

It doesn't reach the 0.7-1 bit per character limit, of course, but then, according to the Hutter challenge no compression program (online or offline) has.

Comment author: SilasBarta 08 July 2010 02:16:41AM 2 points [-]

Wow, and Dasher was invented by David MacKay, author of the famous free textbook on information theory!

Comment author: gwern 08 July 2010 02:18:48AM 1 point [-]

According to Google Books, the textbook mentions Dasher, too.

Comment author: Vladimir_M 07 July 2010 10:23:33PM *  0 points [-]

SilasBarta:

A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboards keys, such that composing an English message requires just as many keystrokes, on average, as it takes on a regular keyboard.

One way to achieve this (though not practical for human-facing interfaces) would be to input the entire message bit by bit in some powerful lossless compression format optimized specifically for English text, and decompress it at the end of input. This way, you'd eliminate as much redundancy in your input as the compression algorithm is capable of removing.

The really interesting question, of course, is what are the limits of such technologies in practical applications. But if anyone has an original idea there, they'd likely cash in on it rather than post it here.

Comment author: Douglas_Knight 08 July 2010 12:03:37AM 1 point [-]

Shannon's estimate of 0.6 to 1.3 was based on having humans guess the next character out of a 27-character alphabet including spaces but no other punctuation.

The impractical leading algorithm achieves 1.3 bits per byte on the first 10^8 bytes of wikipedia. This page says that stripping wikipedia down to a simple alphabet doesn't affect compression ratios much. I think that means that it hits Shannon's upper estimate. But it's not normal text (eg, redirects), so I'm not sure in which way its entropy differs. The practical (for computer, not human) algorithm bzip2 achieves 2.3 bits per byte on wikipedia and I find it achieves 2.1 bits per character on normal text (which suggests that wikipedia has more entropy and thus that the leading algorithm is beating Shannon's estimate).

Since Sniffnoy asked about arithmetic coding: if I understand correctly, this page claims that arithmetic coding of characters achieves 4 bits per character and 2.8 bits per character if the alphabet is 4-tuples.

Comment author: gwern 08 July 2010 12:12:57AM 1 point [-]

bzip2 is known to be both slow and not too great at compression; what does lzma-2 (faster & smaller) get you on Wikipedia?

(Also, I would expect redirects to play in a compression algorithm's favor compared to natural language. A redirect almost always takes the stereotypical form #REDIRECT[[foo]] or #redirect[[foo]]. It would have difficulty compressing the target, frequently a proper name, but the other 13 characters? Pure gravy.)

Comment author: Douglas_Knight 08 July 2010 12:48:31AM 0 points [-]

Here are the numbers for a pre-LZMA2 version of 7zip. It looks like LZMA is 2.0 bits per byte, while some other option is 1.7 bits per byte.

Yes, I would expect wikipedia to compress more than text, but it doesn't seem to be so. This is just for the first 100MB. At a gig, all compression programs do dramatically better, even off-the-shelf ones that shouldn't window that far. Maybe there is a lot of random vandalism early in the alphabet?

Comment author: gwern 08 July 2010 02:24:25AM 0 points [-]

Well, early on there are many weirdly titled pages, and I could imagine that the first 100MB includes all the '1958 in British Tennis'-style year articles. But intuitively that doesn't feel like enough to cause bad results.

Nor have any of the articles or theses I've read on vandalism detection noted any unusual distributions of vandalism; further, obvious vandalism like gibberish/high-entropy strings is the shortest-lived form of vandalism - long-lived vandalism looks plausible & correct, and is indistinguishable from normal English even to native speakers (much less a compression algorithm).

A window really does sound like the best explanation, until someone tries out 100MB chunks from other areas of Wikipedia and finds they compress comparably to 1GB.

Comment author: Douglas_Knight 08 July 2010 03:59:55AM *  1 point [-]

bzip's window is 900k, yet it compresses 100MB to 29% but 1GB to 25%. Increasing the memory on 7zip's PPM makes a larger difference on 1GB than 100MB, so maybe it's the window that's relevant there, but it doesn't seem very plausible to me. (18.5% -> 17.8% vs 21.3% -> 21.1%)

Sporting lists might compress badly, especially if they contain times, but this one seems to compress well.

Comment author: gwern 23 July 2010 09:51:28AM 0 points [-]

That's very odd. If you ever find out what is going on here, I'd appreciate knowing.

Comment author: Christian_Szegedy 07 July 2010 09:21:06PM 2 points [-]

This is already exploited on cell phones to some extent.

Comment author: Sniffnoy 07 July 2010 09:21:06PM 0 points [-]

Doesn't arithmetic coding accomplish this? Or does that not count because it's unlikely a human could actually use it?

Comment author: SilasBarta 07 July 2010 09:29:43PM 1 point [-]

I don't think arithmetic coding achieves the 1 bit / character theoretical entropy of common English, as that requires knowledge of very complex boundaries in the probability distribution. If you know a color word is coming next, you can capitalize on it, but not letterwise.

Of course, if you permit a large enough block size, then it could work, but the lookup table would probably be unmanageable.
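For what it's worth, the principle behind arithmetic coding is easy to demonstrate with exact fractions, which sidesteps the bit-stream and precision issues that make real implementations hairy. A toy order-0 coder (the symbol probabilities are made up for illustration; a practical coder would use an adaptive context model rather than a fixed table):

```python
from fractions import Fraction

def intervals(probs):
    """Assign each symbol a half-open subinterval of [0, 1)."""
    lo, out = Fraction(0), {}
    for sym, p in probs.items():
        out[sym] = (lo, lo + p)
        lo += p
    return out

def encode(msg, probs):
    """Narrow [0, 1) once per symbol; return a number in the final interval."""
    iv = intervals(probs)
    lo, hi = Fraction(0), Fraction(1)
    for s in msg:
        a, b = iv[s]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2

def decode(x, n, probs):
    """Recover n symbols by repeatedly locating x and rescaling."""
    iv = intervals(probs)
    out = []
    for _ in range(n):
        for s, (a, b) in iv.items():
            if a <= x < b:
                out.append(s)
                x = (x - a) / (b - a)
                break
    return "".join(out)

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
x = encode("abca", probs)
print(x, decode(x, 4, probs))  # the round trip recovers "abca"
```

The point is that the code length tracks the probability of the whole message, not any per-character table; the "lookup table" problem disappears because the intervals are computed on the fly.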

Comment author: Sniffnoy 09 July 2010 11:31:48AM 1 point [-]

Yeah, I meant "arithmetic encoding with absurdly large block size"; I don't have a practical solution.

Comment author: [deleted] 07 July 2010 08:27:22PM 6 points [-]

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  4. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community writes about rationality, decision theory, and similar topics. Does anyone disagree? Or agree?

Have assertions 1-4, or something similar to them, been made explicit and defended or criticized anywhere on this website?

The background is that I've been kicking around the idea that a focus on "beliefs" is misleading when modeling intelligence or intelligent agents.

This is my first post, please tell me if I'm misusing any jargon.
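To make assumptions 1-3 concrete, here is a deliberately tiny sketch in Python. The coin-flipping setup and the 0.9 bias are invented purely for illustration; nothing here is a claim about how an actual intelligence should be built:

```python
class Agent:
    """Toy agent with (1) a belief database, (2) an update rule,
    and (3) a decision rule, per the assumptions above."""

    def __init__(self):
        # 1. Database of beliefs: proposition -> probability.
        self.beliefs = {"coin_is_fair": 0.5}

    def observe(self, heads):
        # 2. Update rule: Bayes, on fair vs. biased (0.9 heads) coin.
        p = self.beliefs["coin_is_fair"]
        like_fair = 0.5
        like_biased = 0.9 if heads else 0.1
        self.beliefs["coin_is_fair"] = (
            p * like_fair / (p * like_fair + (1 - p) * like_biased)
        )

    def decide(self):
        # 3. Decision rule: depends only on the belief database.
        return "bet_fair" if self.beliefs["coin_is_fair"] >= 0.5 else "bet_biased"

agent = Agent()
for flip in [True, True, True, True, True]:  # five heads in a row
    agent.observe(flip)
print(agent.decide())  # → bet_biased
```

Assumption 4 then says the agent's rationality is a property of `observe` and `decide` alone, not of the particular contents of `beliefs` at any moment.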

Comment author: whpearson 07 July 2010 10:50:39PM *  1 point [-]

This also reminded me that I wanted to go through the Intentional Stance by Daniel Dennett and find the good bits. Also worth reading is the wiki page.

I think he would state that the model you describe comes from folk psychology.

A relevant passage

"We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the "undeniable introspective fact" that you can feel "centrifugal force" cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and other began to ponder the meta-physical status of color, felt warmth, and other "secondary qualities". These discussions, while cautiously agnostic about folk physics have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external "world"."

On Less Wrong, people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts.

You might have trouble convincing people here, mainly because people are interested in what should be done by an intelligence, rather than what is currently done by humans. It is a lot harder to find evidence for what ought to be done rather than what is done.

Comment author: [deleted] 08 July 2010 12:26:13PM *  0 points [-]

Relevant and new-to-me, thanks.

I'd be interested to hear examples of things, related to this discussion, that people here would not be easily convinced of.

Comment author: whpearson 08 July 2010 04:06:53PM 1 point [-]

The problem I have found is determining what people accept as evidence about "intelligences".

If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans we shouldn't try to build AI with localised beliefs) then evidence about humans would constitute evidence about AI somewhat. In this case things like blindsight (mentioned in the Intentional Stance) would show that beliefs were not easily localised.

I think it is fairly uncontroversial on Less Wrong that beliefs aren't stored in one particular place in humans. However, because people are aware of the limitations of humans, they think that they can design AI without the flaws, so they do not constrain their designs to be humanlike, and that allows them to slip localised/programmatic beliefs back in.

To convince them that localised beliefs were incorrect/unworkable for all intelligences would require a constructive theory of intelligence.

Does that help?

Comment author: whpearson 07 July 2010 09:43:51PM *  -1 points [-]

I'm not so interested in decision theory. I criticised it a bit here

Edit: To give a bit more background to how I view rationality: An intelligence is a set of interacting programs some of which have control of the agent at any one time. The rationality of the agent depends upon the set of programs in control of the agent. The relationship between the set of programs and rationality of the system is somewhat environmentally specific.

Comment author: mstevens 07 July 2010 03:36:00PM 3 points [-]

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself
b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argument preferable as a long-term education project, for example.

I don't really have an answer here, I'm just interested in the conflict and what people think.

Comment author: RobinZ 07 July 2010 03:42:43PM 0 points [-]

There is a third option of making a reasoned, rational meta-argument as to why the methods they were using to develop their position were wrong. I don't know how reliable it is, however.

Comment author: mstevens 07 July 2010 03:54:20PM *  2 points [-]

I've tried very informal related experiments - often in dealing with people it's necessary to challenge their assumptions about the world.

a) People's assumptions often seem to be somewhat subconscious, so there's significant effort to extract the assumptions they're making.

b) These assumptions seem to be very core to people's thinking and they're extremely resistant to being challenged on them.

My guess is that trying to change people's methods of thinking would be even more difficult than this.

EDIT: The first version of this I posted talked more about challenging people's methods; I thought about this more and realised it was more about assumptions, but didn't correctly edit everything to fit that. Now corrected.

Comment author: JohannesDahlstrom 07 July 2010 09:51:26AM *  16 points [-]

Drowning Does Not Look Like Drowning

Fascinating insight against generalizing from fictional evidence in a very real life-or-death situation.

Comment author: Kevin 07 July 2010 03:50:39AM 0 points [-]

Scientific study roundup: fish oil and mental health.

http://www.oilofpisces.com/depression.html

Comment author: RobinZ 07 July 2010 04:04:43AM 3 points [-]

Welcome to the Premier Omega-3/Fish Oil Site on the Web!

I feel cautious about the objectivity of this source. Other sources suggest health benefits to consumption of fish, but I want to be confident that my expert sources are not skewing the selection of research they promote.

Comment author: Kevin 07 July 2010 04:08:27AM 3 points [-]

Regardless of the source, the evidence seems to be rather strong that fish oil does good things for the brain. If you can find any negative evidence about fish oil and mental health, I'd like to see it.

Comment author: RobinZ 07 July 2010 04:13:39AM 2 points [-]

I would like to know of risks associated with fish oil consumption as well. I am not aware of any. I am also not confident that any given site dedicated to the stuff would provide such information if or when it is available. I would suggest investigating independent sources of information (including but not limited to citations within and citations of referenced research) before drawing a confident conclusion.

Comment author: mattnewport 07 July 2010 07:35:54AM 2 points [-]

Fish oil (particularly cod liver oil) has high levels of vitamin A which is known to be toxic at high doses (above what would typically be consumed through fish oil supplements) and some studies suggest is harmful at lower doses (consistent with daily supplementation).

Comment author: RichardKennaway 07 July 2010 04:57:41AM 1 point [-]

Seth Roberts has written about omega-3s. I believe that somewhere in there he's talked about the possibility of mercury contamination in fish oils.

Comment author: RichardKennaway 07 July 2010 06:19:01AM 1 point [-]

Correction: the health risk he wrote about was PCBs in fish oil. For this reason he advocates flaxseed oil as a source of omega-3. Whether there is any real danger I don't know.

Comment author: Douglas_Knight 07 July 2010 06:28:50AM 1 point [-]

PCBs and omega-3s climb the food chain, so they're pretty well correlated. At some point I eyeballed a chart and decided that mercury was negatively correlated with omega-3s. No idea why.

Comment author: Kevin 07 July 2010 05:25:54AM 0 points [-]

I think this is one of those things that may have been a problem >5 years ago but recent regulation in the USA means that all fish oil on the market is now guaranteed to be safe.

Comment author: WrongBot 07 July 2010 05:30:34AM 1 point [-]

That's a rather... disproportionate level of faith to have in the US government's ability to regulate anything. I would not rely on American regulatory agencies for risk assessment in any field, much less one in which so little is currently known.

Comment author: Kevin 07 July 2010 07:48:43AM *  2 points [-]

http://www.nytimes.com/2009/03/24/health/24real.html

I don't have faith, but I have a broad knowledge of the FDA and their regulation of supplements. Usually when the US government works, it works. If evidence comes out that something is dangerous, the FDA usually pulls it from store shelves until it is fixed. Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I knew that there were people claiming fish oil is bad, some of them loudly. I know that this was first disclaimed at least five years ago. I then intuited today that if there ever did exist a safety issue with mercury in fish oil, it would have been fixed by now.

The meme that some fish oil pills are poisoned is mostly perpetuated by companies that are trying to sell you extra expensive fish oil pills.

Comment author: wedrifid 07 July 2010 08:59:34AM *  1 point [-]

(Voted up but...)

Examples of supplements that at a certain point in past history were poisonous but are now correctly regulated are 5-HTP and Kava.

I'd like to clarify that claim, because I took the totally wrong message from it the first read through. We're talking about regulation for quality control purposes and not control of the substance itself (I'm assuming). 5-Hydroxytryptophan itself is just an amino acid precursor that is available over the counter in the USA and Canada.

It is an intermediate product produced when Tryptophan is being converted into Serotonin. It was Tryptophan which was banned by the FDA due to association with EMS (eosinophilia-myalgia syndrome). They cleared that up eventually once they established that the problem was with the filtering process of a major manufacturer, not the substance itself. I don't think they ever got around to banning 5-HTP, even though the two only differ by one enzymatic reaction.

In general it is relatively hard to mess yourself up with amino acid precursors, even though Serotonin is the most dangerous neurotransmitter to play with. In the case of L-Tryptophan and 5-HTP, care should be taken when combining them with SSRIs and MAO-A inhibitors, i.e. take way way less for the same effect or just "DO NOT MESS WITH SEROTONIN!" (in slightly shaky handwriting).

Let me know if you meant something different from the above. Also, what is the story with Kava? All I know is that it is a mild plant based supplement that mildly sedates/counters anxiety/reduces pain, etc. Has it had quality issues too?

Comment author: Kevin 07 July 2010 05:49:48PM 3 points [-]

Thanks for the clarification, yes, by 5-HTP I meant tryptophan.

Serotonin has serious drug interactions with SSRIs and MAOIs, but otherwise is decidedly milder than pharmaceutical anti-depressants. Its effects are more comparable to melatonin than Prozac.

Kava is a plant that counters anxiety, and it is rather effective at doing so but very short lasting. It causes no physical addiction, which is one of the reasons it is on the FDA's Generally Recognized as Safe list. All kava on the market today is sourced from kava root. Kava has a great deal of native/indigenous use, and those people always make their drinks from kava root, throwing away the rest of the plant.

The rest of the plant contains active substances, so in their infinite wisdom, a Western company bought up the cheap kava leaf remnants and made extracts. It turns out that kava leaves contain ingredients that cause large amounts of liver damage, but the roots are relatively harmless.

Kava root still isn't good for the liver, but it is less damaging than alcohol or acetaminophen. It is a bad idea to regularly mix it with alcohol or acetaminophen or other things that are bad for the liver, though.

Comment author: wedrifid 07 July 2010 05:52:34AM 0 points [-]

I share your distrust of the regulatory ability of the US government, particularly the FDA. I further lament the ability of the FDA to damage the regulatory procedures worldwide with their incompetence (or more accurately their lost purpose). In the case of Kevin's specific reference to regulation I suspect even the FDA could manage it. While research on the effects of large doses of EPA and DHA (Omega3) may be scant, understanding of mercury content itself is fairly trivial. I'm taking it that Kevin is referring specifically to quality assurance regarding mercury levels which is at least plausible (given litigation risks for violations).

Comment author: NancyLebovitz 07 July 2010 07:23:58AM 2 points [-]

Stored riff here: I think the world would be a better place if people had cheap handy means of doing quantitative chemical tests. I'm not sure how feasible it is, though I think there's a little motion in that direction.

Comment author: wedrifid 07 July 2010 07:27:03AM 1 point [-]

I would love to have that available, either as a product or a readily accessible service.

Comment author: wedrifid 07 July 2010 05:03:23AM 3 points [-]

(I note that mercury concentration is subject to heavy quality control measures. Quality fish oil supplements will include credible guarantees regarding mercury levels, based on independent testing. This is, of course, something to consider when buying cheap sources from some obscure place.)

Comment author: RobinZ 07 July 2010 05:03:17AM 0 points [-]

Mercury is a known problem with fish in general, agreed. Content varies somewhat with species, I have heard.

Comment author: Cyan 07 July 2010 02:03:53AM *  3 points [-]

I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.

Comment author: kpreid 07 July 2010 02:13:19AM 4 points [-]

This is not a distortion of the original meaning. “Feeding the trolls” is just giving them replies of any sort — especially if they're well-written, because you’re probably investing more effort than the troll.

Comment author: Cyan 07 July 2010 02:52:28AM 0 points [-]

I didn't intend to imply otherwise.

Comment author: JoshuaZ 07 July 2010 02:08:58AM 2 points [-]

I don't think this is unique to LW at all. I've seen well-argued rebuttals to trolls labeled as feeding in many different contexts including Slashdot and the OOTS forum.

Comment author: Vladimir_Nesov 07 July 2010 07:54:26AM *  1 point [-]

We must aspire to a greater standard, with troll-feeding replies being troll-aware of their own troll-awareness.

Comment author: Cyan 07 July 2010 02:55:36AM 0 points [-]

I didn't mean to imply that it was unique to LW.

Comment author: hegemonicon 06 July 2010 05:14:17PM 5 points [-]

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables with positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and because even his conception of intelligence as a large number of separate abilities would need some sort of high-level selection and sequencing function. But neither of those is a particularly compelling reason for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Comment author: gwern 03 April 2013 11:19:29PM 4 points [-]

Here is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/

Comment author: hegemonicon 07 July 2010 02:54:46PM 10 points [-]

I pointed this out to my buddy who's a psychology doctoral student, his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

Comment author: satt 07 July 2010 11:28:49AM *  7 points [-]

But neither of those are particularly compelling reasons for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Shalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct.

Here's a demo. The statistical analysis package R comes with some built-in datasets to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each):

It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine to calculate a general factor for these six time series, that general factor explains 1/3 of their variance!
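(satt's session was in R, and the six datasets aren't listed above; the following is a minimal Python analogue of the same point, with six synthetic "monthly" series that trend for entirely unrelated reasons. The slopes and noise level are arbitrary illustrative choices, and the first principal component of the correlation matrix stands in for R's factor-analysis general factor.)

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(72)  # 72 "monthly" data points, as in the R demo

# Six unrelated series: linear trends of assorted signs plus noise,
# loosely mimicking things like CO2 levels, airline passengers, temperatures.
slopes = np.array([0.8, -0.5, 1.2, 0.3, -0.9, 0.6])
series = t[:, None] * slopes + rng.standard_normal((72, 6)) * 10.0

# Eigenvalues of the 6x6 correlation matrix sum to 6, so the leading
# eigenvalue divided by 6 is the share of variance the "general factor" takes.
corr = np.corrcoef(series, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
print(f"general-factor share of variance: {eigvals[0] / 6:.2f}")
```

Even though the series have nothing causally in common, the leading factor soaks up well over a third of the variance, which is the sense in which factor analysis manufactures a general factor from any sufficiently correlated inputs.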

However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it difficult to say how correct he is overall. What does Shalizi mean specifically by calling g a myth? Does he think it is very unlikely to exist, or just that factor analysis is not good evidence for it? Who does he think is in error about its nature? I can think of one researcher in particular who stands out as just not getting it, but beyond that I'm just not sure.

Comment author: RobinZ 02 March 2011 07:19:46PM 0 points [-]

Belatedly: Economic development (including population growth?) is related to CO2, lung deaths, international airline passengers, average air temperatures (through global warming), and car accidents.

Comment author: HughRistik 07 July 2010 06:00:15PM *  3 points [-]

In your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."

Comment author: satt 07 July 2010 07:14:30PM 7 points [-]

In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real?

The best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake.

To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what.

These results would seem surprising if g was merely a statistical "myth."

Shalizi is, somewhat confusingly, using the word "myth" to mean something like "g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent.

Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)

Comment author: RobinZ 07 July 2010 02:58:42PM 2 points [-]

By the way, welcome to Less Wrong! Feel free to introduce yourself on that thread!

If you haven't been reading through the Sequences already, there was a conversation last month about good, accessible introductory posts that has a bunch of links and links-to-links.

Comment author: satt 07 July 2010 03:29:08PM 2 points [-]

Thank you!

Comment author: hegemonicon 07 July 2010 02:49:44PM 2 points [-]

From what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).

Comment author: satt 07 July 2010 04:41:29PM 1 point [-]

You might be right. I'm not really competent to judge the first issue (causal structure of the mind), and the second issue (interpretation of factor analytic g) is vague enough that I could see myself going either way on it.

Comment author: RobinZ 06 July 2010 07:14:09PM 0 points [-]

I don't think it's surprising that an untenable claim could persist within a field for a long time, once established. Pluto was called a planet for seventy-six years.

I've no idea whether the critique of g is accurate, however.

Comment author: mkehrt 07 July 2010 09:12:22AM 3 points [-]

That's a bizarre choice of example. The question of whether Pluto is a planet is entirely a definitional one; the IAU could make it one by fiat if they chose. There's no particular reason for it not to be one, except that the IAU felt the increasing number of trans-Neptunian objects made the current definition awkward.

Comment author: RobinZ 07 July 2010 11:45:10AM *  4 points [-]

"[E]ntirely a definitional" question does not mean "arbitrary and trivial" - some definitions are just wrong. EY mentions the classic example in Where to Draw the Boundary?:

Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list - you can't say that a list is wrong. I can prove in set theory that this list exists. So my definition of fish, which is simply this extensional list, cannot possibly be 'wrong' as you claim."

Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.

Honestly, it would make the most sense to draw four lists, like the Hayden Planetarium did, with rocky planets, asteroids, gas giants, and Kuiper Belt objects each in their own category, but it is obviously wrong to include everything from Box 1 and Box 3 and one thing from Box 4. The only reason it was done is because they didn't know better and didn't want to change until they had to.

Comment author: mkehrt 08 July 2010 12:17:12AM 7 points [-]

You (well, EY) make a good point, but I think neither the Pluto remark nor the fish one is actually an example of this.

In the case of Pluto, the trans-Neptunians and the other planets seem to belong in a category that the asteroids don't. They're big and round! Moreover, they presumably underwent a formation process that the asteroid belt failed to complete in the same way (or whatever the current theory of formation of the asteroid belt is; I think that it involves failure to form a "planet" due to tidal forces from Jupiter?). Of course there are border cases like Ceres, but I think there is a natural category (whatever that means!) that includes the rocky planets, gas giants and Kuiper Belt objects that does not include (most) asteroids and comets.

On the fish example, I claim that the definition of "fish" that includes the modern definition of fish union the cetaceans is a perfectly valid natural category, and that this is therefore an intensional definition. "Fish" are all things that live in the water, have finlike or flipperlike appendages and are vaguely hydrodynamic. The fact that such things do not all share a common descent* is immaterial to the fact that they look the same and act the same at first glance. As human knowledge has increased, we have made a distinction between fish and things that look like fish but aren't, but we reasonably could have kept the original definition of fish and called the scientific concept something else, say "piscoids".

*well, actually they do, but you know approximately what I mean.

Comment author: wnoise 09 July 2010 09:34:54PM 1 point [-]

The fact that such things do not all share a common descent* *well, actually they do, but you know approximately what I mean.

The usual term is "monophyletic".

Comment author: mkehrt 09 July 2010 11:55:09PM 1 point [-]

Yes, but neither fish nor (fish union cetaceans) is monophyletic. The descent tree rooted at the last common ancestor of fish also contains the tetrapods, and the descent tree rooted at the last common ancestor of tetrapods contains the cetaceans.

I am not any sort of biologist, so I am unclear on the terminological technicalities, which is why I handwaved this in my post above.

Comment author: Emile 10 July 2010 03:32:56PM 2 points [-]

Fish are a paraphyletic group.

Comment author: wedrifid 08 July 2010 02:56:45AM 2 points [-]

I'm inclined to agree. Having a name for 'things that naturally swim around in the water, etc' is perfectly reasonable and practical. It is in no way a nitwit game.

Comment author: NancyLebovitz 08 July 2010 02:36:04AM 3 points [-]

Nitpick: if in your definition of fish, you mean that they need to both have fins or flippers and be (at least) vaguely hydrodynamic, I don't think seahorses and puffer fish qualify.

Comment author: cousin_it 06 July 2010 06:12:26PM *  7 points [-]

I think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.)

In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables.
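(Shalizi's synthetic demonstration is easy to reproduce in outline. The sketch below is not his code: the parameters are mine, with 1,000 independent abilities rather than three thousand, and twelve tests each summing a random half of them. The point survives the change: no general factor exists in the data-generating process, yet the leading factor of the score correlation matrix soaks up around half the variance.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_abilities, n_tests, per_test = 2000, 1000, 12, 500

# Each subject has 1000 statistically independent "abilities";
# by construction there is no single general factor.
abilities = rng.standard_normal((n_subjects, n_abilities))

# Each test samples a random half of the abilities and sums them.
# Any two tests overlap on ~half their abilities, so scores correlate ~0.5.
scores = np.column_stack([
    abilities[:, rng.choice(n_abilities, per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

# Leading eigenvalue of the correlation matrix, divided by the number of
# tests, is the share of variance the apparent "g" factor explains.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
share = eigvals[0] / n_tests
print(f"first-factor share of variance: {share:.2f}")
```

The overlap between tests, not any underlying general ability, is what generates the strong first factor; that is the effect cousin_it describes above, and the question in dispute is what conclusion it licenses.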

As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.

Comment author: Vladimir_M 07 July 2010 04:56:36AM *  7 points [-]

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

Comment author: satt 07 July 2010 02:22:12PM 3 points [-]

If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it.

There is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence (1975) and Race, IQ and Jensen (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)

Comment author: Vladimir_M 08 July 2010 08:17:18AM 3 points [-]

Yes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies.

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.

Comment author: satt 08 July 2010 09:57:46PM *  1 point [-]

However, I disagree that nothing very significant has happened in the field since their publication

Me too! I just don't think there's been much new data brought to the table. I agree with you in counting Flynn's 1987 paper and the Minnesota followup report, and I'd add Moore's 1986 study of adopted black children, the recent meta-analyses by Jelte Wicherts and colleagues on the mean IQs of sub-Saharan Africans, Dickens & Flynn's 2006 paper on black Americans' IQs converging on whites' (and at a push, Rushton & Jensen's reply along with Dickens & Flynn's), Fryer & Levitt's 2007 paper about IQ gaps in young children, and Fagan & Holland's papers (2002, 2007, 2009) on developing tests where minorities score equally to whites. I guess Richard Lynn et al.'s papers on the mean IQ of East Asians count as well, although it's really the black-white comparison that gets people's hackles up.

Having written out a list, it does look longer than I expected...although it's not much for 30-35 years of controversy!

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments.

Amen. The regression argument should've been dropped by 1980 at the latest. In fairness to Flynn, his book does namecheck that argument and explain why it's wrong, albeit only briefly.
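(For readers unfamiliar with why the regression argument is weak: regression toward the mean falls out of any imperfect correlation, with no hereditary or other causal mechanism required. A toy simulation under an arbitrary illustrative parent-child correlation of 0.5:)

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.5  # r is an arbitrary illustrative parent-child correlation

# Jointly normal standardized scores with correlation r;
# no mechanism of transmission, genetic or environmental, is modelled.
parent = rng.standard_normal(n)
child = r * parent + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Parents selected for scoring above +2 SD average about +2.37 SD,
# but their children average only r * 2.37, i.e. roughly +1.19 SD.
high = parent > 2.0
print(f"mean child score of high-scoring parents: {child[high].mean():.2f}")
```

Since any two imperfectly correlated measurements show this pattern, observing it between relatives cannot by itself discriminate genetic from environmental transmission.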

Comment author: NancyLebovitz 08 July 2010 11:34:59AM *  0 points [-]

What would appropriate policy be if we just don't know to what extent IQ is different in different groups?

Comment author: Vladimir_M 09 July 2010 08:25:49AM *  4 points [-]

Well, if you'll excuse the ugly metaphor, in this area even the positive questions are giant cans of worms lined on top of third rails, so I really have no desire to get into public discussions of normative policy issues.

Comment author: Morendil 07 July 2010 06:28:53AM 3 points [-]

long post on the heritability of IQ, which is better, but still clearly slanted ideologically

OK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

Comment author: Vladimir_M 07 July 2010 07:29:34AM *  10 points [-]

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll just conclude with the following remark. You can read Shalizi's article, conclude that it's the definitive word on the subject, and accept his view of the matter. But you can also read more widely on the topic, and see that his presentation is far from unbiased, even if you ultimately conclude that his basic points are correct. The relevant literature is easily accessible if you just have internet and library access.