Open Thread: June 2010

5 Post author: Morendil 01 June 2010 06:04PM

To whom it may concern:

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

Comments (651)

Comment author: byrnema 27 June 2010 06:47:33AM *  1 point [-]

I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I could easily forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else, perhaps?

It's an argument for dualism. Here is some background:


I've always been a monist: believing that everything should be coherent from within this reality. This is the idea that if things don't make sense, it is due to limited knowledge and a limited brain, not an incomplete universe. (Where the universe is the physical material world.)

While composing Less Wrong comments, I've often thought about what an incomplete universe would look like. (Since this is what dualists claim -- what do they mean by something existing differently or beyond material existence?)

I've written before that a simulation (a simulation is a reality S that is a subset of something larger) is just as good as (or the same as) "reality" if the simulation is complete within itself. That is, if an agent within the simulation would find that in principle everything within the simulation is coherent and can be understood from within the simulation. Importantly, there is no hint within the simulation of anything existing outside the simulation. (For example, in the multiple worlds theory, if the many worlds don't interact, each world is its own independent complete reality. The worlds are simulated within a larger entity of all the worlds.)

When physical materialists claim that the physical material world is our entire reality, they are claiming that the physical material world is a reality X, and you cannot deduce anything beyond X from within X. That is, there doesn't exist anything but X, as far as we're concerned. (We can speculate about many worlds, but unless the worlds interact, one world cannot deduce the others.) I've always found this to be obvious, because if you can deduce anything beyond X from within X, then what you've deduced is part of the physical material world (because you deduced it, through interaction) and it's part of X after all.

(end of background material)


It just occurred to me that we do have evidence that our physical material world X is incomplete. So I've stumbled on this argument for dualism. It's actually a very old one, but approached from a different angle. As I said, I stumbled upon it.

It's the problem of existence. Being a monist means believing that if things don't make sense, it is due to limited knowledge and a limited brain. But the problem of existence is such that no amount of knowledge will solve it: there's nothing we could ever learn (or even believe) within X that would solve this problem. Not a complete understanding of the physics of the beginning of the universe. Not even theism!

I cannot understand what the answer to the problem could possibly be, but I think that I can understand that there is no answer possible within X. So to the extent that I am correct that this problem is not in theory solvable in X, X is incomplete.

I could be incorrect about whether this problem is in principle unsolvable in X. But I am relatively certain of it, on the same level as having confidence in logic. If I lose confidence in logic, I have nothing to reason with. So for now, I would find it more reasonable to guess that I'm in a simulation of some kind where this particular conundrum is embedded. X is a subset of a larger reality Y where existence is explained.

Given what we know about X, and the problem of existence, what can we deduce about the larger universe Y where existence is explained? Anything? What about deducing anything from the peculiar fact that X is missing information about existence?

Comment author: Blueberry 27 June 2010 11:08:12AM 0 points [-]

By "problem of existence" you mean why we exist and how we came to exist? Why do you think that can't be answered within our world? And what do you think a world would look like if you could solve the problem in it?

Comment author: byrnema 27 June 2010 05:55:59PM 0 points [-]

By "problem of existence" you mean why we exist and how we came to exist?

Yes. Why and how anything exists, and what existence is.

Why do you think that can't be answered within our world?

The reason that I think this problem can't be answered within our world is that the lack of an answer doesn't seem to be a matter of lack of information. It's a unique question in that, although it seems perfectly reasonable, there's no possibility of an answer to it, not even a false one.

It's a reasonable question because X is a causal reality, so it is reasonable to ask what caused X. There's no possibility of an answer to the question because causality is an arrow that always requires a point of departure. If you say the universe was created by a spark, and the rest followed by mathematics and logical necessity, still, what created that spark?

Religions have creation stories, but they explain the creation of X by the creation of X outside X. So creation stories don't resolve the conundrum of creation, they just move creation to someplace outside experience, where we cannot expect to understand anything. This may represent a universal insight that the existence of X cannot be explained within X.

And what do you think a world would look like if you could solve the problem in it?

This is analogous to being in Flatland and wondering about edges. I suppose the main mysterious thing about the larger universe Y would be acausality. Here within X, it seems to be a rule, if not a logical principle, that everything is determined by something else. If something were to happen spontaneously, how did it decide to? What is the rule or pattern for its spontaneous appearance? These are all reasonable questions within X. Somehow Y gets around them.

Comment author: Blueberry 28 June 2010 08:31:18AM 0 points [-]

There's no possibility of an answer to the question because causality is an arrow that always requires a point of departure.

What do you think of the following answer? There is some evidence that backward time travel may be possible under some circumstances in a way that is compatible with general relativity. So suppose, many years in the future, a team of physicists and engineers creates a wormhole in the universe and sends something back to the time of the Big Bang, causing it and creating our universe. That way, it's all self-contained.

Comment author: byrnema 28 June 2010 04:42:51PM 0 points [-]

Self-contained is good, though it doesn't resolve the existence problem. (What is the appropriate cliché there ... you can't pull yourself out of quicksand by pulling on your boots?)

Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.

Comment author: wedrifid 28 June 2010 05:24:45PM 0 points [-]

Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.

It also makes encryption more difficult!

Comment author: ata 27 June 2010 07:12:32AM *  1 point [-]

I don't see where dualism comes in. Specifically what kind of dualism are you talking about?


Being a monist means believing that if things don't make sense, it is due to limited knowledge and a limited brain. But the problem of existence is such that no amount of knowledge will solve it: there's nothing we could ever learn (or even believe) within X that would solve this problem. ... So to the extent that I am correct that this problem is not in theory solvable in X, X is incomplete.

A problem being unsolvable within some system does not imply that there is some outer system where it can be solved. Take the Halting Problem, for example: there are programs whose halting behavior we can never prove one way or the other, and this itself is provable. Yet in any given instance there is a right answer (a program either halts or it doesn't); in some cases we simply can never know which.
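The undecidability result that example appeals to comes from the classic diagonal argument. The sketch below is hypothetical by construction: the `halts` oracle it assumes cannot actually be implemented (that is the point of the argument), and the names are purely illustrative.

```python
# Sketch of the diagonal argument behind the Halting Problem.
# We *assume* a total halts(program, arg) oracle exists, then build a
# program that contradicts it. The oracle below is a hypothetical
# stand-in: by this very argument, no real implementation can exist.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) would halt."""
    raise NotImplementedError("no such total function can exist")

def trouble(program):
    # Loop forever exactly when the oracle says program(program) halts.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Consider trouble(trouble): if halts(trouble, trouble) were True,
# trouble(trouble) would loop forever; if it were False, trouble(trouble)
# would halt. Either answer contradicts the oracle's own verdict, so no
# general halting oracle exists -- yet each program still either halts
# or it doesn't.
```

This is the sense in which the question has a definite answer in every instance even though no procedure inside the system can always find it.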

That you say "I cannot understand what the answer to the problem could possibly be" suggests that it is a wrong question. Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that, but maybe you will come up with something interesting.

Comment author: byrnema 27 June 2010 10:00:44PM *  0 points [-]

A problem being unsolvable within some system does not imply that there is some outer system where it can be solved.

Agreed, I was imprecise before. It is not generally 'a problem' if something is unknown. In the case of the halting problem, it's OK if the algorithm doesn't know when it is going to halt. (This doesn't make it incomplete.) However, it is a problem if X doesn't know how X was created (this makes X incomplete).

The difference is that an algorithm can be implemented -- and fully aware of how it is implemented, and know every line of its own code -- without knowing where it is going to halt. Where it's going to halt isn't squirreled away in some other domain to be read at the right moment; the rules for halting are known by the algorithm, it just doesn't know when those rules will be satisfied.

In contrast, X could not have created itself without any source code to do so. The analogous situation would be an algorithm that has halted but doesn't know why it halted. If it cannot know through self-inspection why it halted, then it is incomplete: it must deduce that something outside itself caused it to halt.

Comment author: byrnema 27 June 2010 06:10:41PM 0 points [-]

I agree that when a question doesn't have any possibility of an answer, it's probably a wrong question. But in this case, I don't see how it could be a wrong question. It seems like a perfectly reasonable question that we've gotten habituated to not having an answer to. It's evidence -- if we were looking for evidence -- that X is incomplete and we are in a simulation.

We set a lot of store by the convenient fact that our reality is causal. So why can't we ask what caused reality?

I have my tentatively preferred answer to that, but maybe you will come up with something interesting.

No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect).

(By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)

Comment author: ata 28 June 2010 04:40:10AM *  0 points [-]

No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect).

Here's where I stated it most recently, and I wrote an earlier post getting at the same sort of thing (where I see you posted a few comments), but at this point I've decided to abstain from actually advocating it until I have a better handle on some of the currently-unanswered questions raised by it. At the same time, I do feel like this line of reasoning (the conclusion I like to sum up as "Existence is what mathematical possibility feels like from the inside") is a step in the right direction.

I do realize now that it is not as complete a solution as I originally thought — it makes me feel less confused about existence, but newly confused about other things — but I do still have the sense that the ultimately correct explanation of existence will not specially privilege this reality over others, and that our mental algorithms regarding "existence" are leading us astray. That seems to be the only state of affairs that does not compel us to believe in an infinite regress of causality, which doesn't really seem to explain anything, if it even makes logical sense.

In any case, although I definitely have to concede that this problem is not solved, I am not convinced that it is not solvable. Metaphysical cosmology has been one of the most difficult areas of philosophy to turn into science or math, but it may yet fall.

(By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)

Alright, that's what threw me off. I think "dualism" is usually used to refer specifically to theories that postulate ontologically-basic mental substances or properties separate from normal physical interactions; not that "there are aspects of reality we interact with beyond science", but that our consciousness or minds are made of something beyond science. Your reasoning does not imply the latter, correct?

Comment author: byrnema 28 June 2010 04:54:12PM 0 points [-]

Oh, that was you. I think the Ultimate Ensemble idea is really appealing as an explanation of what existence is. (The way possibility feels from the inside, as you wrote.)

Comment author: Blueberry 27 June 2010 11:08:39AM 1 point [-]

Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that

What is it?

Comment author: Ruddiger 06 June 2010 02:55:28AM 1 point [-]

In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.

Eliezer, do you have a full list of 37 things you would never do as a Dark Lord and what's on it?

  1. I will not go around provoking strong, vicious enemies.
  2. Don't Brag
  3. ?
Comment author: RichardKennaway 07 June 2010 09:28:31AM *  2 points [-]

All of the replies to this should be in the thread for discussing HP&tMoR.

Comment author: JoshuaZ 06 June 2010 03:27:35AM 1 point [-]

This is a reference to the Evil Overlord List. That's why Harry starts snickering. Indeed, it is almost implied that Voldemort wrote the actual Evil Overlord List. For the most common version, see Peter's Evil Overlord List. Having such a list for Voldemort seems to be at least partially just rule of funny.

Comment author: MBlume 06 June 2010 04:30:50PM 3 points [-]

Did the Evil Overlord List exist publicly in 1991? I was actually a bit confused by Harry's laughter here. Eliezer seems to be working pretty hard to keep things actually in 1991 (truth and beauty, the Journal of Irreproducible Results, etc.)

Comment author: JoshuaZ 06 June 2010 04:59:46PM 1 point [-]

That's a good point. I'm pretty sure the Evil Overlord List didn't exist that far back, at least not publicly. It seems like for references to other fictional or nerd-culture elements he's willing to monkey around with time. Thus for example, there was a Professor Summers for Defense Against the Dark Arts which wouldn't fit with the standard chronology for Buffy at all.

Comment author: NancyLebovitz 06 June 2010 05:22:03PM 3 points [-]

Checking Wikipedia, it looks possible but not likely that Harry could have seen the list in 1991.

Comment author: Blueberry 06 June 2010 06:11:59PM 1 point [-]

Well, he and his father are described as being huge science fiction fans, so it's not that unlikely that they heard about the list at conventions, or had someone show them an early version of the list printed from email discussions, even if they didn't have Internet access back then.

Comment author: NancyLebovitz 06 June 2010 06:43:34PM 0 points [-]

I'm pretty sure they did have internet access back then. It was more available through universities than it was to the general public.

Comment author: Blueberry 07 June 2010 12:49:24AM 1 point [-]

I meant even if Harry's parents didn't have access back then, someone could still have printed out the list and showed it to them.

Comment author: RomanDavis 07 June 2010 08:46:02AM 1 point [-]

That doesn't sound very rational. The simplest answer seems to be, "Eliezer thought it would be funny" and he would have included the Evil Overlord List in the fanfic even if the Evil Overlord he was talking about was Caligula.

Comment author: Blueberry 09 June 2010 07:53:09PM *  0 points [-]

Of course it was included because Eliezer thought it would be funny. But I don't see what's so irrational about Harry reading the printed copy of the list.

Comment author: Oscar_Cunningham 06 June 2010 04:51:41PM 0 points [-]

Good call, although the fic doesn't explicitly mention the evil overlord list.

Comment author: RomanDavis 06 June 2010 03:35:54AM 2 points [-]

The reason I think it might actually be plot relevant is that most people can't resist making a list much longer than 37 rules. Plus, most of the rules are just lampshades for tropes that show up again and again in fiction with evil overlords. They are rarely such basic, practical advice as "stop bragging so much."

Comment author: JoshuaZ 06 June 2010 03:52:42AM *  16 points [-]

Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want to pick a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a specific range of the form 1 to n, they are most likely to pick a number that is around 3n/4. The really clear examples are from 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact number but I think it is around 30% that pick 7), and then 1 to 50, where a very large percentage will pick 37. The upshot is that if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.
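The claimed bias is easy to check against poll tallies with a chi-square goodness-of-fit test. A minimal sketch (the response counts below are made up for illustration, shaped like the roughly-30%-pick-7 pattern described above, not real poll data):

```python
# Chi-square goodness-of-fit statistic against a uniform null:
# large values mean the responses are very unlikely to be uniform.

def chi_square_uniform(counts):
    """Chi-square statistic for observed counts vs. uniform picking."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical tallies for "pick a number from 1 to 10", with the
# spike at 7 described in the comment (30 of 100 responses).
counts = [3, 2, 6, 4, 9, 7, 30, 12, 15, 12]

print(round(chi_square_uniform(counts), 1))  # 60.8
# With 9 degrees of freedom, the 0.1% critical value is about 27.9,
# so a statistic this large would soundly reject uniform picking.
```

Under truly uniform picking the statistic would hover around the degrees of freedom (here 9), so a value in this range is strong evidence of bias.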

Comment author: Oscar_Cunningham 06 June 2010 01:18:02PM *  1 point [-]

It just occurred to me that the odd/even bias applies only because we work in base ten. Humans working in a prime base (like base 11) would be much less biased (in this respect).

Comment author: JoshuaZ 06 June 2010 05:16:34PM *  0 points [-]

Well, that seems plausible, although what is going on there is being divisible by 2, not being prime. If your general hypothesis is correct, then if we used a base 9 system, numbers divisible by 3 might seem off. However, I'm not aware of any bias against numbers divisible by 5. And there's some evidence that suggests that parity is ingrained in human thinking (children can much more easily grasp the notion of whether a number is even or odd, and can do basic arithmetic with even/oddness much faster than with higher moduli).

Comment author: Oscar_Cunningham 06 June 2010 05:36:25PM *  2 points [-]

I searched for "human random number" in Google and three of the results were polls on internet fora. Polls A & C were numbers in the range 1 to 10; poll B was in the range 1 to 20. C had the best participation. (By coincidence, I had participated in poll B.)

I screwed up my experimental design by not thinking of a test before I looked at the results, so if anyone else wants to judge these they should think up a measure of whether certain numbers are preferred before they follow the links.

A B C

(You have a double post btw)

Comment author: RobinZ 07 June 2010 12:39:58PM 1 point [-]

JoshuaZ's statement implies a peak near 15 for B and outright states 30% of responses to A and C near 7. I would guess that 13 and 17 would be higher than 15 for B and that 7 will still be prominent, and that odd numbers (and, specifically, primes) will be disproportionately represented.

I will not edit this comment after posting.

Comment author: Blueberry 07 June 2010 05:11:07PM 1 point [-]

Why primes?

Comment author: RobinZ 07 June 2010 06:49:27PM 3 points [-]

My instinct is that numbers with obvious factors (even numbers and multiples of five especially) will appear less random - and in the range from 1 to 20, that's all the composites.

Comment author: Eliezer_Yudkowsky 06 June 2010 01:06:48PM 6 points [-]

Ouch. I am burned.

Comment author: JoshuaZ 06 June 2010 08:43:24PM 2 points [-]

Well, that's OK. Because I just wrote a review of Chapter 23 criticizing Harry's rush to conclude that magic is a single-allele Mendelian trait, and then read your chapter notes where you say the same thing. That should make us even.

Comment author: RomanDavis 06 June 2010 03:08:34AM 0 points [-]

I have a feeling they are ammunition in Chekhov's Gun, and therefore any attempts to get more data will lead to spoilers.

Comment author: DanArmak 05 June 2010 07:53:39PM *  1 point [-]

What does 'consciousness' mean?

I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.

People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encountered an android that looked and behaved identically to a human, but inside its head had a very different physical implementation? What would saying it was "conscious" or "not conscious" mean?

And, what does this have to do with my personal subjective experience? It's the foundation (or medium) of everything I know or believe; but most definitions of what it is tend to be dualism-like in that, once again, saying someone else has or doesn't have subjective experience tells us nothing about the physical world.

Help appreciated!

Comment author: RichardKennaway 07 June 2010 10:23:35AM 0 points [-]

What I mean by "consciousness" is my sensation of my own presence. Googling for definitions of "conscious" and "consciousness" gives mostly similar forms of words, so that concept would appear to be what is generally understood by the words.

Do philosophers have some other specific generally understood convention of exactly what they mean by these words?

Comment author: DanArmak 07 June 2010 02:36:38PM 0 points [-]

What exactly do you mean by 'sensation'? Does it have to do with "subjective experience" and "qualia", or just the bare fact that you're modeling yourself as part of the world, like RomanDavis and Blueberry's definitions?

Comment author: RichardKennaway 07 June 2010 03:21:46PM *  2 points [-]

By "sensation" I mean the subjective experience.

If you ask me what I mean by "subjective" and "experience", well, you could follow such a train of questions indefinitely and eventually I would have no answer. But what would that prove? You're not asking for a theory of how consciousness works, but a description of the thing that such a theory would be a theory of.

Ask someone five centuries ago what they mean by "water" and all they'll be able to say is something like "the wet stuff that flows in rivers and falls from the sky". And you can ask them what they mean by "rivers" and "sky", but to what end? All you're likely to get if you press the matter is some bad science about the four elements.

"Consciousness" is in a similar state. I have an experience I label with that word, but I can't tell you how that experience happens.

Comment author: DanArmak 07 June 2010 03:45:21PM 1 point [-]

That's great - I use the word in the same way. As far as I can tell, some other people don't - see the comments by RomanDavis and Blueberry that I linked to. This confusion over the meaning of the word is what I wanted to highlight.

The way that some others use the word (to mean "an agent that models itself" or "an agent that perceives itself"), either they have successfully dissolved the question of what subjective experience is, or I don't understand them correctly, or indeed different people use the word to mean different things.

And the reason I started out talking about that is that I've seen this cause confusion both on LW and elsewhere.

Comment author: RomanDavis 05 June 2010 08:10:33PM *  0 points [-]

There are a lot of hypotheses floating around.

Mine is:

We have awareness. That is, we observe things in the territory with our senses, and include them in our map of the territory. The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of its inner sensations) in the territory.

Some people think there are things you can only know if you experience them yourself. In theory, you could run a decent simulation of what it's like to be a bat, but you would still have memories of being human, and therefore awareness of bat territory wouldn't be enough.

My solution: implant memories, including bat memories of not having human memories, into yourself. In theory, this should work.

Comment author: DanArmak 05 June 2010 08:41:30PM 1 point [-]

There are a lot of hypotheses floating around.

I hope you don't mean you're hypothesizing what the word "consciousness" means; rather, your hypotheses are alternate predictions about physical unknowns or about the future. Which is it?

I'm asking what the definition, the meaning, of the word consciousness is. Hypothesizing what a word means feels like the wrong way to do things. Well, unless we're hypothesizing what other people mean when they say "consciousness". But if we're using the word here at LW we shouldn't need to hypothesize, we can all just tell one another what we mean...

The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of its inner sensations) in the territory.

Under that definition, any agent that models the world and includes its own behavior in the model (and any good general model will do that) - is called conscious. (I would call that self-modeling or self-aware.)

So any moderately intelligent, effective agent - like my hypothetical aliens and androids - would be called conscious.

That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.

Comment author: RomanDavis 05 June 2010 08:45:33PM 0 points [-]

That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.

I think consensus here is that the idea of p-zombies is silly.

Comment author: DanArmak 05 June 2010 08:52:33PM *  0 points [-]

Certainly. But is the idea of ordinary zombies also silly? That's what your definition implies.

ETA: not that I'm against that conclusion. It would make things so much simpler :-) I just have the experience that many people mean something else by "consciousness", something that would allow for zombies.

Comment author: RomanDavis 05 June 2010 09:15:29PM 0 points [-]

What's the difference?

Comment author: DanArmak 05 June 2010 09:28:23PM *  0 points [-]

If you define "consciousness" in a way that allows for unconscious but intelligent, even human-equivalent agents, then those are called zombies. Aliens or AIs might well turn out to be zombies. Peter Watts's vampires from Blindsight are zombies.

ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of.

(My original comment was wrong (thanks Blueberry!) and said: The difference between a zombie and a p-zombie is that p-zombies claim to be conscious, while zombies neither claim nor believe to be conscious.)

Comment author: Jack 05 June 2010 10:10:55PM *  0 points [-]

ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of.

Where the heck is this terminology coming from? As I learned it the 'philosophical' in "philosophical zombie" is just there to distinguish it from Romero-imagined brain-eating undead.

Comment author: Blueberry 05 June 2010 10:19:22PM 1 point [-]

Yes, but we need some other term for "unconscious human-like entity". I read one paper that used the terms "p-zombie" and "b-zombie", where the p stood for "physical" as well as "philosophical" and the b stood for "behavioral".

Comment author: Jack 05 June 2010 10:29:49PM *  0 points [-]

I'd rather call the first an n-zombie (meaning neurologically identical to a human). And, yeah, let's use b-zombie instead of zombie, as all of these are varieties of philosophical zombie.

(But yes they're just words. Thanks for clarifying.)

Comment author: Blueberry 05 June 2010 09:45:28PM 2 points [-]

This is very different from my understanding of the definition of those terms, which is that p-zombies are physically identical to a conscious human, and a zombie is an unconscious human-equivalent with a physical, neurological difference.

I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious, any more than an unconscious computer could print out the sentence "I am conscious."

Comment author: DanArmak 05 June 2010 09:54:07PM 1 point [-]

You're right. It's what I meant, but I see that my explanation came out wrong. I'll fix it.

I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious

That's true. But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.

My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.

Comment author: torekp 06 June 2010 12:02:13AM 0 points [-]

If a subjective experience is the same event, differently described, as a neural process, you can be wrong about whether you are having it. You can also be wrong about whether you and another being share the same or similar quale, especially if you infer such similarity solely from behavioral evidence.

Even aside from physical-side-of-the-same-coin considerations, a person can be mistaken about subjective experience. A tries the new soup at the restaurant and says "it tastes just like chicken". B says, "No, it tastes like turkey." A accepts the correction (and not just that it tastes like turkey to B). The plausibility of this scenario shows that we can be mistaken about qualia. Now, admittedly, that's a long way from being mistaken about whether one has qualia at all - but to rule that possibility in or out, we have to make some verbal choices clarifying what "qualia" will mean.

Roughly speaking, I see at least two alternatives for understanding "qualia". One would be to trot out a laundry list of human subjective feels: color sensations, pain, pleasure, tastes, etc., and then say "this kind of thing". That leaves the possibility of zombies wide open, since intelligent behavior is no guarantee of a particular familiar mental mechanism causing that behavior. (Compare: I see a car driving down the road, doing all the things an internal combustion engine-powered vehicle can do. That's no guarantee that internal combustion occurs within it.)

A second approach would be to define "qualia" by its role in the cognitive economy. Very roughly speaking, qualia are properties highly accessible to "executive function", which properties go beyond (are individuated more finely than by) their roles in representing, for the cognizer, the objective world. On this understanding of "qualia" zombies might be impossible - I'm not sure.

Comment author: Blueberry 05 June 2010 10:14:15PM *  0 points [-]

But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.

Well, the claim would be objectively incorrect; I'm not sure it's meaningful to say that the zombie would be wrong.

My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.

As others have commented, it's having the capacity to model oneself and one's perceptions of the world. If p-zombies are impossible, which they are, there are no "pure subjective" experiences: any entity's subjective experience corresponds to some objective feature of its brain or programming.

Comment author: Vladimir_Nesov 05 June 2010 09:25:58PM 0 points [-]

P-zombies can write philosophical papers on p-zombies.

Comment author: RomanDavis 05 June 2010 09:28:13PM 0 points [-]

Oh, P Zombies are just the reductio ad absurdum version? Yeah, I don't believe in Zombies.

Comment author: JoshuaZ 05 June 2010 09:31:59PM 0 points [-]

P-zombies aren't just a reductio ad absurdum, although most of LW does consider them to be. David Chalmers, who is a very respected philosopher, takes the idea quite seriously, as do a surprisingly large number of other philosophers.

Comment author: RomanDavis 05 June 2010 09:35:50PM 0 points [-]

Please explain to me how it is not.

You can't just say, "This smart guy takes this very seriously." Aristotle took a lot of things very seriously that turned out to be nonsense.

Comment author: Briareos 05 June 2010 03:44:40PM *  11 points [-]

I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.

Comment author: Alexandros 05 June 2010 11:08:52AM *  3 points [-]

Guided by Parasites: Toxoplasma Modified Humans

a ~20-minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma and its effects on humans. This is a must-see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.

Comment author: NancyLebovitz 05 June 2010 06:18:34PM 2 points [-]

Thanks for the link.

If people's desires are influenced by parasites, what does that do to CEV?

Comment author: Blueberry 05 June 2010 06:29:00PM 5 points [-]

If your desires are influenced by parasites, then the parasites are part of what makes you you. You may as well ask "If people's desires are influenced by their past experience, what does that do to CEV?" or "If people's desires are influenced by their brain chemistry, what does that do to CEV?"

Comment author: Alexandros 05 June 2010 07:22:49PM 7 points [-]

So what if Dr. Evil releases a parasite that rewires humanity's brains in a predetermined manner? Should CEV take that into account or should it aim to become Coherent Extrapolated Disinfected Volition?

Comment author: cupholder 05 June 2010 08:19:39PM 5 points [-]

What if Dr. Evil publishes a book or makes a movie that rewires humanity's brains in a predetermined manner?

Comment author: Alexandros 05 June 2010 08:31:15PM *  2 points [-]

Yep, I made a reference to cultural influence here. That's why I suspect CEV should be applied uniformly to the identity-space of all possible humans rather than the subset of humans that happen to exist when it gets applied. In that case defining humanity becomes very, very important.

Of course, perhaps the current formulation of CEV covers the entire identity-space equally and treats the living population as a sample, and I have misunderstood. But if that is the case, Wei Dai's last article is also bunk, and I trust him to have better understanding of all things FAI than myself.

Comment author: cupholder 05 June 2010 09:25:11PM 3 points [-]

Heh - my first instinct is to bite the bullet and apply CEV to existing humans only. I couldn't give a strong argument for that, though; I just can't immediately think of a reason to exclude non-culturally influenced humans while including culturally influenced humans.

Comment author: NancyLebovitz 06 June 2010 01:15:23AM 2 points [-]

It's hard to tell what counts as an influence and what doesn't.

It would be interesting to see what would happen if the effects of parasites could be identified and reversed. The results wouldn't necessarily all be good, though.

Comment author: Alexandros 05 June 2010 09:53:21PM 0 points [-]

I am not sure I follow your last sentence. Can you elaborate?

Comment author: cupholder 05 June 2010 10:28:09PM 2 points [-]

I'll give it a try. A human's mind and preferences might be influenced by cultural things like books and TV, and they might be influenced by non-cultural things like parasites. (And of course a lot of people will be influenced by both.) I can't think of a reason to include the former in CEV and exclude the latter that feels non-arbitrary to me, so I don't feel as if parasitically modified brains warrant different treatment, such as altering CEV to cover the space of all possible humans. My gut evaluates the prospect of parasite-driven brains as just another kind of human brain. (I'm presuming as well that CEV as currently formulated is just meant to cover existing humans, not all possible humans.) That makes me content to apply CEV to existing humans only - I don't feel I have to try to account for brain changes due to culture or parasites or what have you by expanding it to incorporate all of brain space.

Comment author: Blueberry 05 June 2010 08:13:05PM *  3 points [-]

You may as well ask: "What if Dr. Evil kills every other living organism? Should CEV take that into account or should it aim to become Coherent Extrapolated Resurrected Volition?"

Of course, if someone modifies or kills all the other humans, that will change the result of CEV. Garbage in, garbage out.

Comment author: NancyLebovitz 05 June 2010 09:25:32AM 4 points [-]

The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.

This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.

There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky to careers for scientists to take on projects which last that long), and some intriguing theory at the end that activities can be classified into exploitation (low risk, low reward) and exploration (high risk, high reward), and that people aren't apt to want to do exploration full time, so, if given a job that's full-time exploration (like institutional science), they'll turn most of it into exploitation.

Comment author: Nisan 04 June 2010 09:18:51AM 8 points [-]

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’

(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)

This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.

My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?

Comment author: Vladimir_M 06 June 2010 12:32:45AM *  7 points [-]

David Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia":
http://consc.net/papers/qualia.html

He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

Comment author: DanArmak 04 June 2010 06:26:22PM *  9 points [-]

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

Comment author: torekp 06 June 2010 12:19:09AM 2 points [-]

Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked.

Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.

Comment author: Liron 04 June 2010 07:16:52AM 2 points [-]

What's the deal with female nymphomaniacs? Their existence seems a priori unlikely.

Comment author: RichardKennaway 07 June 2010 09:37:48AM 1 point [-]

This question reads to me like it's out of the middle of some discussion I didn't hear the beginning of. Why were "nymphomaniacs" on your mind in the first place? What do you mean by the word? I don't think I've heard it in many years, and I associate it with the sexual superstitions of a former age.

Comment author: LucasSloan 07 June 2010 02:46:05AM *  1 point [-]

female nymphomaniacs

What does the word "nymphomaniacs" mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with this word with such negative connotations.

Does the question "what is with women who want to have sex [five times a week*] and will undertake to get it?" resolve any of your confusion? You should expect that women who have more sex will be more salient when people talk about them, so they would seem more prominent, even if they are only 2% of the population.

*not sure about this number, just picked one that seemed alright.

Comment author: JoshuaZ 07 June 2010 03:13:08AM *  2 points [-]

Picking a number for this seems like a really bad idea. For most modern clinical definitions of disorders what matters is whether it interferes with normal daily behavior. Even that is questionable since what constitutes interference is very hard to tell.

Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.

Comment author: Alicorn 07 June 2010 02:55:47AM 4 points [-]

Five times a week wouldn't be remotely enough to diagnose. It has to be problematic and clinically significant.

Comment author: LucasSloan 07 June 2010 04:42:50AM 2 points [-]

I think that's kinda my point. I was attempting to point out that he's probably conflating the term "nymphomaniac", with its negative connotations, with "likes to have some vaguely defined 'a lot' of sex."

Comment author: Blueberry 07 June 2010 06:52:01AM *  3 points [-]

"Nymphomaniac" hasn't been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean "a woman who likes to have a lot of sex". Whether this has negative connotations depends on your attitude to sex, I suppose.

Comment author: RomanDavis 05 June 2010 04:07:02AM 2 points [-]

Then your priors are wrong. Adjust accordingly.

Comment author: Liron 05 June 2010 07:43:13AM 4 points [-]

"What's the deal with" means "What model would have generated a higher prior probability for". Noticing your confusion isn't the entire solution.

Comment author: RomanDavis 05 June 2010 08:22:43PM *  1 point [-]

I thought it was pretty clear. Sexual Dimorphism doesn't operate the way you think it does. Women with high sex drives aren't rare at all.

I have heard that, for most men and most women, the time of highest sex drive happens at very different ages (much younger for men than for women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.

Comment author: Mitchell_Porter 05 June 2010 08:11:26AM *  6 points [-]

If the existing model is sexual dimorphism, with high sexual desire a male trait, you could simply suppose that it's a "leaky" dimorphism, in which the sex-linked traits nonetheless show up in the other sex with some frequency. In humans this should especially be possible with male traits which depend not on the Y chromosome, but rather on having one X chromosome rather than two. That means that there is only one copy, rather than two, of the relevant gene, which means trait variance can be greater - in a woman, an unusual allele on one X chromosome may be diluted by a normal allele on the other X, whereas a man with an unusual X allele has no such counterbalance. But it would still be easy enough for a woman to end up with an unusual allele on both her Xs.
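The single-copy variance point is easy to illustrate with a toy simulation (assuming purely additive allele effects drawn from a standard normal, which ignores X-inactivation and dominance; the numbers are for illustration only):

```python
import random

random.seed(0)

# Toy model of an X-linked additive trait: a male expresses his single
# X allele; a female expresses the average of her two X alleles.
def allele():
    return random.gauss(0, 1)  # additive allele effect

N = 100_000
males = [allele() for _ in range(N)]                     # one copy
females = [(allele() + allele()) / 2 for _ in range(N)]  # average of two

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# variance(males) comes out near 1.0, variance(females) near 0.5:
# averaging two independent copies halves the variance, so extreme
# trait values are rarer in women, as described above.
```

So even before invoking any specific allele, single-copy expression alone widens the male trait distribution relative to the female one.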

Also, regardless of the specific genetic mechanism, human dimorphism is just not very extreme or absolute (compared to many other species), and forms intermediate between stereotypical male and female extremes are quite common.

Comment author: Vladimir_M 05 June 2010 02:49:37AM 3 points [-]

Their existence seems a priori unlikely.

Why?

Comment author: gwern 04 June 2010 06:10:45PM 3 points [-]

And they are accordingly rare, are they not?

Comment author: Blueberry 05 June 2010 02:36:02AM 3 points [-]

No, women with a high sex drive are not rare.

Comment author: Liron 05 June 2010 01:01:44AM 0 points [-]

Maybe. I don't know.

Comment author: gwern 03 June 2010 06:41:41PM 10 points [-]

http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php

'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html

Comment author: aausch 03 June 2010 04:49:31PM 0 points [-]

I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.

The hardware can be reprogrammed by two actors - the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).

I haven't yet seen a coherent discussion of this kind of model - maybe it exists and I'm missing it. Is there already a coherent discussion of this point of view on this site, or somewhere else?

Comment author: pjeby 03 June 2010 11:01:20PM 0 points [-]

Is there already a coherent discussion of this point of view on this site, or somewhere else?

It's a little old, but there's always The Multiple Self.

Comment author: Jordan 03 June 2010 11:00:16PM 1 point [-]

I look at conscious thought like a person trying to simultaneously ride multiple animals. Each animal can manage itself; if left to its own devices it'll keep on walking in some direction, perhaps even a good one. The rider can devote different levels of attention to any given animal, but his level of control bottoms out at some point: he can't control the muscles of the animals, only the trajectory (and not always this).

One animal might be vision: it'll go on recognizing and paying attention to things unspurred, but the rider can rein the animal in and make it focus on one particular object, or even one point on that object.

The animals all interact with each other, and sometimes it's impossible to control one after being incited by another. And of course, the rider only has so much attention to devote to the numerous beasts, and often can only wrangle one or two at time.

Some riders even have reins on themselves.

Comment author: NancyLebovitz 03 June 2010 10:46:47PM 0 points [-]

I think that's a part of PJEby's theories.

Comment author: hegemonicon 03 June 2010 03:19:51PM 0 points [-]

In the same vein as Roko's investigation of LessWrong's neurotypicalness, I'd be interested to know the spread of Myers-Briggs personality types that we have here. I'd guess that we have a much higher proportion of INTPs than the general population.

An online Myers-Briggs test can be found here, though I'm not sure how accurate it is.

Comment author: [deleted] 05 June 2010 09:41:16AM *  2 points [-]

del

Comment author: AdeleneDawner 06 June 2010 01:35:51AM 0 points [-]

http://lesswrong.com/lw/2a5/on_enjoying_disagreeable_company/22ga

That's a small sample, but we actually seem to score below average on Conscientiousness. Of the 7 responses to that request, the Conscientiousness scores were 1, 1, 8, 13, 41, 41, and 58.

Comment author: mattnewport 05 June 2010 09:39:32PM 0 points [-]

I tend to score very high on openness to experience and average to low on extraversion but only average to low on conscientiousness.

Comment author: JoshuaZ 03 June 2010 03:33:55PM *  1 point [-]

There are a lot of problems with Myers-Briggs. For example, the test doesn't account for people saying things because they are considered socially good traits. Claims that Myers-Briggs is accurate seem often to be connected to the Forer effect. A paper which discusses these issues is Boyle's "Myers-Briggs Type Indicator (MBTI): Some psychometric limitations", 1995 Australian Psychologist 30, 71–74.

Comment author: xamdam 03 June 2010 01:51:49PM *  1 point [-]

First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil

As a follow-up, I am curious whether anyone has attempted to mathematically model Ray's biggest and most disputed claim, which is the accelerating rate of technological change. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim rested on a model rather than basically a regression. I imagine a model that would represent the entire human society (including our technology) as an information processing machine and would argue that the processing capability gets better by X% after a (rather artificial) 'cycle', contributing to the next cycle.
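For what it's worth, here is a minimal sketch of the kind of model I have in mind. Both assumptions (a fixed fractional improvement per cycle, and cycle duration shrinking in proportion to capability) are mine, for illustration; they are not Kurzweil's published model:

```python
# Toy model: society as an information processor whose capability
# improves by a fixed fraction per "cycle", with each cycle completing
# faster as capability grows.
def simulate(improvement=0.05, initial_cycle_years=10.0, n_cycles=50):
    capability, t, history = 1.0, 0.0, []
    for _ in range(n_cycles):
        t += initial_cycle_years / capability  # later cycles take less time
        capability *= 1 + improvement
        history.append((t, capability))
    return history
```

Under these assumptions the total elapsed time for infinitely many cycles converges (here to 210 years), so capability blows up in finite wall-clock time; whether the assumptions actually fit the historical data is exactly the disputed part.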

Comment author: JoshuaZ 03 June 2010 01:59:26PM *  1 point [-]

Note that Kurzweil's responded to the data dredging complaint by taking major lists compiled by other people, combining them and showing that they fit a roughly exponential graph. (I don't have a citation for this unfortunately).

Edit: I'm not aware of anyone making a model of the sort you envision, but it seems to suffer the same problem that Kurzweil has in general, which is a potential overemphasis on information processing ability.

Comment author: xamdam 03 June 2010 02:16:09PM 0 points [-]

Why is basing this argument on information processing bad?

Comment author: JoshuaZ 03 June 2010 02:27:34PM *  2 points [-]

Information processing isn't the whole story of what we care about. For example, the amount of energy available to societies and the per capita energy availability both matter. (In fairness, Kurzweil has discussed both of these, albeit not as extensively as information issues.)

Another obvious metric to look at is average lifespan. This is one where one doesn't get an exponential curve. Now, if you assert that most humans will live to at least 50, and so look at lifespan minus 50 in major countries over the last hundred years, then the data starts to look slightly more promising, but Kurzweil has never discussed this as far as I'm aware, because he hasn't discussed lifespan issues much at all, except in the most obvious fashion. You can modify the data in other ways also. One of my preferred metrics looks at the average lifespan of people who survive past age 3 (this helps deal with the fact that we've done a lot more to reduce infant mortality than we have to actually extend lifespan on the upper end). And when you do this, most gains in lifespan go away.

Comment author: xamdam 03 June 2010 02:35:05PM 1 point [-]

Good points. Still I feel that basing the crux of the argument on information processing is valid, unless the other concerns you mention interfere with it at some point. Is that what you're saying?

Good observation about infant mortality; there should be an opposite metric of "% of centenarians", which would be a better measure in this context.

Comment author: JoshuaZ 03 June 2010 05:21:20PM *  2 points [-]

%Centenarians might not be a good metric given that one will get an increasing fraction of those as birth rates decline. For the US, going by the data here and here, I get a total of 1.4 * 10^-4 for the fraction of the US population that is over 100 in 1990, and a result of 1.7 * 10^-4 in 2000. But I'm not sure how accurate this data is. For example, in the first of the two links they throw out the 1970 census data as giving a clearly too-high number. One needs a lot more data points to see if this curve looks exponential (obviously two isn't enough), but the linked paper claims that for the foreseeable future the fraction of the population that is over 100 will increase by 2/3rds each decade. If that is accurate, then that means we are seeing an exponential increase.
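As a quick sanity check on those two census fractions (taking the quoted numbers at face value):

```python
import math

# Census-based fractions of the US population over 100, as quoted above.
f1990, f2000 = 1.4e-4, 1.7e-4
growth_per_decade = f2000 / f1990  # ~1.21, i.e. ~21% growth per decade
claimed_growth = 1 + 2 / 3         # the linked paper's projected 2/3 per decade
# implied doubling time at the observed rate, in decades (~3.6, i.e. ~36 years)
doubling_decades = math.log(2) / math.log(growth_per_decade)
```

So the two census points by themselves imply roughly 21% growth per decade, well short of the paper's projected two-thirds; which is another reason two data points are nowhere near enough to call the curve exponential.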

Another metric to use might be the age of the oldest person by year of birth worldwide. That data shows a clear increasing trend, but the trend is very weak. Also, one would expect such an increase simply by increasing the general population (Edit: and better record keeping since the list includes only those with good verification), so without a fair bit of statistical crunching, it isn't clear that this data shows anything.

Comment author: JoshuaZ 03 June 2010 02:46:00PM *  1 point [-]

Well, they do interfere. For example, lifespan issues help tell us whether we're actually taking advantage of the exponential growth in information processing, or, for that matter, whether it matters even if we are. If, for example, information processing ability increases exponentially but the marginal difficulty of improving other things (like, say, lifespan) increases at a faster rate, then even with an upsurge in information processing one isn't necessarily going to see much in the way of direct improvements. Information processing is also clearly limited in use based on energy availability. If I went back to, say, 1950 and gave someone access to a set of black boxes that mimic modern computers, the overall rate of increase in tech wouldn't be that high, because information processing ability, while sometimes the rate-limiting step, often is not (for example, the generation of new ideas and the speed at which prototypes can be constructed and tested both matter). And this is even more apparent if I go further back in time. The timespan from 1900 to 1920 wouldn't look very different with those boxes added, to a large extent because people wouldn't know how to take advantage of their ability. So there are a lot of constraints other than just information processing and transmission capability.

Edit: Information processing might potentially work as one measure among a handful but by itself it is very crude.

Comment author: Alexandros 03 June 2010 09:08:37AM *  3 points [-]

I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:

Charlie Munger on the 24 Standard Causes of Human Misjudgment

http://freebsd.zaks.com/news/msg-1151459306-41182-0/

Comment author: DZS 03 June 2010 07:17:23AM 1 point [-]

I couldn't post an article due to lack of karma, so I had to post here :P

I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see if there is anyone on here who is actually against MWI, and if so, why?

After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.

Are there any more?

Comment author: JamesPfeiffer 05 June 2010 05:56:09AM 1 point [-]

Welcome!

Here is a comment by Mitchell Porter.

http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi

Comment author: torekp 06 June 2010 12:38:29AM 1 point [-]

Seconding Mitchell Porter's friendly attitude toward the Transactional Interpretation, I recommend this paper by Ruth Kastner and John Cramer.

Comment author: taw 03 June 2010 04:56:46AM 7 points [-]

I have a theory: Super-smart people don't exist, it's all due to selection bias.

It's easy to think someone is extremely smart if you've only seen a sample of their most insightful thinking. But every time that happened to me and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they'd written there.

So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people, who think and publish their thought a lot. Because the best thoughts get distributed, and average and worse thoughts don't, it's very easy from such small biased samples to believe some of them are far smarter than the rest, but their averages are pretty much the same.

(feel free to replace "smart" by "rational", the result is identical)
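The selection-bias mechanism is easy to simulate (all numbers below are made up for illustration): give every "author" the same underlying quality distribution, and show observers only each author's single best thought.

```python
import random

random.seed(0)

# Every author draws thoughts from the SAME quality distribution, but
# observers only ever see each author's best thought.
def best_thought(n_thoughts):
    return max(random.gauss(0, 1) for _ in range(n_thoughts))

casual = [best_thought(10) for _ in range(100)]        # 10 thoughts each
prolific = [best_thought(10_000) for _ in range(100)]  # 10,000 thoughts each

def avg(xs):
    return sum(xs) / len(xs)

# avg(casual) lands around +1.5 sigma, avg(prolific) around +3.8 sigma:
# identical underlying ability, yet the prolific authors' visible
# output looks dramatically "smarter".
```

On this toy model, "super-smart" is just what a normally-smart but prolific author looks like when you only ever meet their best output.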

Comment author: dyokomizo 07 June 2010 12:59:33AM 2 points [-]

How would you describe the writing patterns of super-smart people? Similarly, how would meeting/talking/debating them would feel like?

Comment author: taw 08 June 2010 06:09:55PM 4 points [-]

I think my comment was rather vague, and people aren't sure what I meant.

This is all my impressions, as far as I can tell evidence of all that is rather underwhelming; I'm writing this more to explain my thought than to "prove" anything.

It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that leave them below even the normal human range, but let's ignore them entirely here.

Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise and that's about it. They often make the most basic logic mistakes etc.

Then there are "smart" people who are capable of original insight, and don't get too stupid too often. They're not measuring example the same thing, but IQ tests are capable of distinguishing between those and the normal people reasonably well. With smart people both their top performance and their average performance is a lot better than with average people. In spite of that, all of them very often fail basic rationality for some particular domains they feel too strongly about.

Now I'm conflicted if people who are so much above "smart" as "smart" is above normal really exists. A canonical example of such person would be Feynman - from my limited information he seems to be just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few such other people.

Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs by such people exist. And if such blogs existed, I would expect to have found a few by now.

And yet, every time it seemed to me that someone might just be that smart and I started reading their blog - it turned out very quickly that my estimate of their smartness suffered from rapid regression to the mean. All my super-smart candidates managed to say such horrible things, and be deaf to such obvious arguments that I doubt any of them really qualifies.

So here's an alternative theory. No human alive is much smarter than the "normally smart". Of population of normally smart people, thanks to domain expertise, wit and writing skill, compatibility with my beliefs (or at least happening to avoid my red flags), higher productivity, luck etc. some people simply seem much smarter than that.

I'm not trolling here, but consider Eliezer - I've picked the example because it's well known here. For some time he was exactly such a candidate, however:

  • he is ridiculously good at writing - just look at his fanfics, biasing my perception
  • he manages to avoid many of my red flags, biasing my perception
  • he has cultural background pretty similar to mine, biasing my perception
  • his writing style is very good at avoiding unwarranted certainty - this might seem more rational, but it's really more of a style issue - people like Eliezer and Tyler Cowen who write cautiously just seem far smarter to me than people like Robin Hanson who write in "no disclaimer" style - even though I know very well that Robin is fully aware that the contrarian theories he proposes are usually wrong, and that there are usually other factors in addition to the one he happens to be writing about at the moment - and he says so every time he's asked. Style differences bias my perception again.
  • Eliezer usually manages to avoid writing about things I know more than him about, so he usually has advantage of expertise, biasing my perception.
  • So it's safe to guess that however smart Eliezer is, I'm overestimating him - nearly all biases point in identical way.
  • On the other hand he sometimes makes ridiculously wrong statements, like his calculation of the cost of cryonics, which was blatantly an order of magnitude off - I still don't know if this was a massive brain failure (this and other such failures disqualifying him as a super-smart candidate), or a conscious attempt at dark arts (in which case he might still qualify, but he loses points for other reasons).

On the other hand, and this provides some counter-evidence to my theory, let's look at myself. I publish anything on my blog and in comments everywhere that seems to have expected public value higher than zero, and very often I'm in a hurry / sleep-deprived, or otherwise far below my top performance. I exaggerate to get the point across very often. I write outside my area of expertise a lot, not uncommonly making severe mistakes. I'm not that good at writing (not to mention that English is not my first language), so things I say may be very unclear.

Unfortunately a normally smart person with my behaviour patterns, and a super-smart person with my behaviour patterns, would probably both fail my super-smartness test.

As you can see, I'm not even terribly convinced that my "super-smart people don't exist" theory is true. I would love to see if other people have good evidence or insight one way or the other.

Another by-the-way: very often a blatantly wrong belief might still be the least-wrong belief given someone's web of beliefs. Often it's easier to believe some minor wrong than to rebuild your whole belief system, risking far more damage, just to make something small come out correct. So perhaps even my test for being really really wrong is not all that useful.

Comment author: DanielVarga 12 June 2010 10:07:40AM 0 points [-]

There is an important systematic bias you only tangentially mention in your analysis. Super-smart people (more generally, very successful people) don't feel they have to prove themselves all the time. (Especially if they are tenured. :) ) Many of them like to talk before they think. There are very smart people around them who quickly spot the obvious mistakes and laboriously complete the half-baked ideas. It is just more economic this way.

Comment author: Mitchell_Porter 12 June 2010 06:30:20AM 3 points [-]

if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now.

Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.

Comment author: xamdam 11 June 2010 04:30:49PM *  2 points [-]

I doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons.

I am also not sure about your definition of super-smart. Is an idiot savant (in math, say) super-smart? If you mean super-smart = consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there, as good ideas get more complex and require more processing power, but given how crazy this world is, I suspect Norm Smart the Rationalist can score surprisingly highly on a relative basis.

As a data point you might want to look at the "Monster Minds" chapter of Feynman's "Surely You're Joking", since you mentioned Feynman. The chapter is about Einstein.

Finally, where is your blog? ;)

Comment author: taw 11 June 2010 06:21:17PM 1 point [-]
Comment author: Vladimir_Nesov 11 June 2010 06:22:46PM 2 points [-]

You can set that in "preferences".

Comment author: cousin_it 11 June 2010 12:34:13PM 5 points [-]

A few people who blog frequently and fit my criteria for "super-smart": Terence Tao, Cosma Shalizi, John Baez.

Comment author: Risto_Saarelma 12 June 2010 08:03:04AM 2 points [-]

I was thinking of Tao as well. Also, Oleg Kiselyov for programming/computer science.

Comment author: cousin_it 12 June 2010 10:11:07AM *  0 points [-]

Yep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.

Comment author: cupholder 11 June 2010 11:41:52PM 0 points [-]

Interesting picks. I hadn't thought of Cosma Shalizi as 'super-smart' before, just erudite and with a better memory for the books and papers he's read than me. Will have to think about that...

Comment author: dyokomizo 11 June 2010 10:39:37AM 3 points [-]

It doesn't seem to me that you have an accurate description of what a super-smart person would do/say other than match your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.

Comment author: taw 11 June 2010 12:00:43PM 1 point [-]

I don't know what the correct super-smartness cluster is, so I cannot make an objective, predictive definition, at least not yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and it's a far easier concept than "super-smartness". This kind of speculation might end up producing something useful with some luck (or not).

Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than the "normally smart" people.

Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and a background of normally smart people (it's pretty easy to get good data on those, even by generic proxies like education - so I'm least worried about that), in a way that would be robust against even a large number of data errors. That's about the best I can think of.

Unfortunately it will be of no use, as my sample will not be random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these.

So the project is most likely doomed. It was interesting to think about this anyway.

Comment author: CronoDAS 09 June 2010 04:57:04AM 4 points [-]

Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise and that's about it. They often make the most basic logic mistakes etc.

I think you're giving the "normal person" too little credit.

Comment author: NancyLebovitz 09 June 2010 08:50:51AM 3 points [-]

Agreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.

Comment author: cupholder 08 June 2010 08:58:16PM 1 point [-]

So here's an alternative theory. No human alive is much smarter than the "normally smart".

Reminds me of 'My Childhood Role Model'.

As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'

Comment author: JoshuaZ 08 June 2010 06:27:54PM 3 points [-]

I'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion?

Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now.

Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general if blogging is a good medium for actually finding this sort of thing. It is easy to see if a blogger isn't very smart; it isn't clear to me that the medium allows one to easily tell if someone is very smart.

Comment author: Jack 05 June 2010 09:44:23AM 0 points [-]

Have you never had an in-person conversation with a super-smart person?

Also, hi folks, I'm back. It is surprisingly difficult to dive back into LW after leaving it for a few weeks.

Comment author: taw 05 June 2010 02:32:45PM 0 points [-]

Obviously no, as I don't believe in their existence.

Comment author: Jack 05 June 2010 05:15:19PM 2 points [-]

My point is that I have trouble telling the difference between a fairly-smart and a super-smart person by their writing, for exactly the reason you mentioned. But in-person conversations give you access to the raw material, and if I take myself to be fairly smart, there are definitely super-smart people out there. For example, I imagine that if you had gotten to talk to Richard Feynman while he was alive, you would have quickly realized he was a super-smart person.

Comment author: taw 06 June 2010 01:14:09AM 1 point [-]

I'd guess it's far, far easier to fool someone in person, with all the noise of primate social cues, so such information is worth a lot less than writing.

Comment author: JoshuaZ 05 June 2010 05:21:06PM 4 points [-]

I'm not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Distinguishing them doesn't seem to happen easily based on quick interactions alone. I can distinguish people in my own field to some extent, but if it isn't my own area, it is much more difficult. Worse, there are serious cognitive biases about intelligence estimation. People are more likely to think of someone as smart if they share interests, and also more likely to think of someone as smart if they agree on issues. (Actually I don't have a citation for this one and a quick Google search doesn't turn it up; does someone else maybe have a citation for this?) One could imagine that many people, on meeting a near copy of themselves, might conclude that the copy was a genius. That said, I'm pretty sure that there are at least a few people out there who reasonably do qualify as super-smart. But to some extent, that's based more on their myriad accomplishments than any personal interaction.

Comment author: snarles 03 June 2010 02:45:58PM *  6 points [-]

I'm not a psychologist but I thought I could improve on the vagueness of the original discussion.

There are a few factors which determine "smartness" (or potential for success):

  1. Speed. Having faster hardware.

  2. Pattern Recognition. Being better at "chunking".

  3. Memory.

  4. Creativity. (="divergent" thinking.)

  5. Detail-awareness.

  6. Experience. Having incorporated many routines into the subconscious thanks to extensive practice.

  7. Knowledge. (Quality is more important than quantity.)

The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training: that is, roughly,

potential for success=talent * training

All this math, of course, is not remotely intended to be taken at face value, but it's merely the most efficient way to make my point.

The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training. Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:

person with 15 talent * 100 training => 1,500 success

person with 17 talent * 110 training => 1,870 success

Which is about 25% more success for only 13% more talent.

There's probably some more formal work done along these lines, I'm not an economist either.
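The multiplicative model above can be sketched in a few lines (all numbers are the comment's illustrative ones, not empirical data):

```python
# Hypothetical "geometric" model of success from the comment above;
# the units of talent and training are made up for illustration.
def success(talent, training):
    return talent * training

modest = success(15, 100)  # person with 15 talent, 100 units of training
gifted = success(17, 110)  # person with 17 talent, 110 units of training

assert modest == 1500
assert gifted == 1870
assert round(gifted / modest - 1, 2) == 0.25  # ~25% more success...
assert round(17 / 15 - 1, 2) == 0.13          # ...from only ~13% more talent
```

The point is just that a small talent edge, compounded by a rational choice to invest more in training, yields a disproportionate gap in output.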

Comment author: NancyLebovitz 03 June 2010 11:02:43AM *  6 points [-]

If you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic bookish idea of super-smartness.

Also, I have no idea how good your judgment is about whether what you call brain-hurtful is actually ideas I'd think were egregiously wrong.

I think there are a lot of folks smart enough to be special people-- those who come up with worthwhile insights frequently.

And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.

Comment author: [deleted] 03 June 2010 05:09:08AM *  15 points [-]

I was thinking something similar just today:

Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.

Comment author: NancyLebovitz 03 June 2010 11:03:58AM 3 points [-]

I Am a Strange Loop by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.

Comment author: cousin_it 03 June 2010 10:04:10AM *  8 points [-]

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

Comment author: khafra 03 June 2010 04:56:26AM 4 points [-]

After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.

Comment author: cousin_it 03 June 2010 10:15:39AM *  12 points [-]

It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)

Comment author: simplicio 03 June 2010 05:59:28AM 5 points [-]

It is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy.

But it would be nice to lay down some ground-rules.

Comment author: Matt_Duing 03 June 2010 05:17:06AM 1 point [-]

My feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.

Comment author: mattnewport 03 June 2010 05:00:26AM 2 points [-]

I don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.

Comment author: khafra 03 June 2010 04:37:47PM 3 points [-]

I think a current policy debate has potential for better results, since it would offer the potential for betting, and avoid some of the self-identification and loyalty that's hard to avoid when applying a model as simple as a political philosophy to something as complex as human culture.

Comment author: fburnaby 03 June 2010 06:50:51PM *  1 point [-]

Since we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was originally a sort of spin-off from OB, maybe the addition of a karma-based prediction market of some sort would be suitable (and very interesting).

Comment author: JoshuaZ 03 June 2010 06:53:19PM 1 point [-]

Maybe make bets of karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk averse individuals might be more willing to join the system.

Comment author: fburnaby 04 June 2010 06:49:26PM *  2 points [-]

I think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere.

Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.

Comment author: mkehrt 02 June 2010 10:19:15PM 7 points [-]

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinitely powerful theoretical being offers me a choice: I can hit all ten of them with the stick once each, or I can hit one of them nine times. "Hitting with a stick" has some constant negative utility for all the people. What do I do?

This seems to me to be exactly dust specks vs. torture scaled down to humanly intuitable scales. I think the obvious answer is to hit all the people once. Examining my intuition tells me that this is because I think the aggregation function for utility is different across different people than across one person's possible futures. Specifically, my intuition tells me to maximize, across people, the minimum expected utility across an individual's possible futures.

So, is there a name for this position?

Do people think my example is equivalent to DSvsT?

Do people get the same or different answer with this question as they do with DSvsT?

Comment author: snarles 03 June 2010 02:53:26PM *  1 point [-]

I'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit once, or B) to have a 1/10 chance of getting hit 9 times.

Assuming rationality and constant disutility of getting hit, every one of them would choose B.
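The argument can be put in expected-value terms (a sketch, assuming as the comment does a constant disutility per hit, here normalized to 1):

```python
# Expected number of hits under each option, with disutility linear in hits.
option_a = 1             # everyone gets hit once: 1 hit for sure
option_b = (1 / 10) * 9  # 1/10 chance of taking all 9 hits: 0.9 expected

assert option_b < option_a  # a rational hit-minimizer picks B
```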

Comment author: Nick_Tarleton 03 June 2010 05:14:41AM *  3 points [-]

"Hitting with a stick" has some constant negative utility for all the people.

I don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.

Comment author: Unnamed 03 June 2010 03:38:03AM *  7 points [-]

DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.

The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.

Comment author: Khoth 02 June 2010 10:36:03PM 5 points [-]

I don't think maximising the minimum is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
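The conflict between the two aggregation rules is easy to make concrete (a sketch with hypothetical utilities, one unit of disutility per hit):

```python
# Per-person utilities for the example above; 0 = untouched.
one_person_20_hits = [-20] + [0] * 4  # option 1: one person hit 20 times
five_people_19_hits = [-19] * 5       # option 2: five people hit 19 times each

# Utilitarian (sum) aggregation prefers option 1: -20 > -95.
assert sum(one_person_20_hits) > sum(five_people_19_hits)

# Maximin (maximize the worst-off person's utility) prefers option 2,
# since -19 > -20 -- the counterintuitive verdict.
assert min(five_people_19_hits) > min(one_person_20_hits)
```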

Comment author: RomanDavis 02 June 2010 10:30:00PM *  1 point [-]

I think the point of Dust Specks vs. Torture was scope failure. Even allowing for some sort of "negative marginal utility", once you hit a wacky number like 3^^^3, it doesn't matter: .000001 negative utility points multiplied by 3^^^3 is worse than anything, because 3^^^3 is wackily huge.

For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
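For a sense of how "wacky huge" 3^^^3 is, here is a sketch using Knuth's up-arrow notation, where a↑↑b is a power tower of b copies of a:

```python
def tetrate(a, b):
    """a up-arrow-up-arrow b: a power tower of b copies of a."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

assert tetrate(3, 2) == 3 ** 3   # 27
assert tetrate(3, 3) == 3 ** 27  # 7,625,597,484,987

# 3^^^3 = 3^^(3^^3): a power tower of about 7.6 trillion threes,
# far too large to compute or even meaningfully write down.
```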

Comment author: NancyLebovitz 03 June 2010 01:12:29AM 2 points [-]

I think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow.

Marginal disutility could plausibly work in the opposite direction from marginal utility.

Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.

Comment author: RomanDavis 03 June 2010 01:38:28AM *  1 point [-]

I was thinking more along the lines of scope failure: If some one said you were going to be hit 11 times would you really expect it to feel exactly 110% as bad as being hit ten times?

But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.

Comment author: Blueberry 02 June 2010 10:46:43PM 1 point [-]

Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people.

Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.

Comment author: Khoth 02 June 2010 11:01:32PM 0 points [-]

Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.

It's always hard to think about this sort of thing. I read that in the original problem, but then I ended up thinking about actually hitting people with sticks when deciding what was best. Is there anything in the archives like The True Prisoner's Dilemma, but for giving an intuitive version of problems with adding utility?

Comment author: RomanDavis 02 June 2010 10:55:36PM 0 points [-]

Then it depends. If you're a utilitarian, it is still better to hit the guy nine times than to hit ten people once each.

If you allow some ideas about the utility of equality, then things get more complicated. That's why I think most people reject the simple math that 9 < 10.

Comment author: Blueberry 02 June 2010 10:28:32PM 3 points [-]

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.)

Oh, and I'd love to hear what you mean about this.

Comment author: Blueberry 02 June 2010 10:24:58PM 2 points [-]

There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.

Comment author: SilasBarta 02 June 2010 10:11:50PM *  4 points [-]

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until the end of the article that her degrees are in women's studies and religious studies.

There are much better ways to spend $100K. Twentysomethings like her are filling up the workforce. I'm worried about the future implications.

I thank my lucky stars I'm not in such a position (in the respects listed in the article -- Munna's probably better off in other respects). I didn't handle college planning as well as I could have, and I regret it to this day. But at least I didn't go deep into debt for a worthless degree.

Comment author: NancyLebovitz 03 June 2010 01:04:09AM 1 point [-]

Twentysomethings like her are filling up the workforce.

Do you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?

Comment author: SilasBarta 03 June 2010 03:08:43AM 0 points [-]

What's the substantive difference? In both cases, the young person has taken out a debt intended to amplify earnings by more than the debt costs, but that isn't going to happen. What does it matter whether the degree was of "any use" or not? What matters is whether it was enough use to cover the debt, not simply whether there exists some gain in earnings due to the degree (which there probably is, though only via signaling, not direct enhancement of human capital).

Comment author: NancyLebovitz 03 June 2010 10:37:22AM 2 points [-]

I was making a distinction between extreme bad judgment (as shown in the article) and moderately bad judgment and/or bad luck.

Your emphasis upthread seemed to be on how foolish that woman and her family were.

Comment author: Seth_Goldin 03 June 2010 12:23:49AM 1 point [-]

Arnold Kling has some thoughts about the plight of the unskilled college grad.

1 2

Comment author: SilasBarta 03 June 2010 03:28:32AM *  2 points [-]

Thanks for the links, I had missed those.

I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone, or take them on as a grad student, simply for meeting that bar, while I haven't noticed the demand for my labor (as someone well above and beyond that bar) being anything like what such a shortage would imply.

Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals!

[1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...

Comment author: Vladimir_M 03 June 2010 05:40:19PM 10 points [-]

One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.

Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined categories of easily verifiable evidence. They will inevitably end up producing at least some decisions that common-sense prudence would recognize as silly, but that's the cost of scalability.

Also, it should be noted that these two reasons are not independent. Consistent adherence to formalized bureaucratic decision-making procedures is also a powerful defense against predatory plaintiffs and regulators. If a company can produce papers with clearly spelled out rules for micromanaging its business at each level, and these rules are per se consistent with the tangle of regulations that apply to it and don't give any grounds for lawsuits, it's much more likely to get off cheaply than if its employees are given broad latitude for common-sense decision-making.

Comment author: NancyLebovitz 03 June 2010 10:54:25PM 1 point [-]

As nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.

Comment author: SoullessAutomaton 03 June 2010 03:43:30AM 8 points [-]

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (e.g., paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payment with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.
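The "expected profit" framing above can be put as a toy calculation. All numbers below are illustrative assumptions, not real lending data or an actual scoring model -- just a sketch of why a borrower who never carries a balance can be worth less to a lender than one who does:

```python
# Toy model: expected lender profit for different borrower profiles.
# All inputs are hypothetical; the point is the shape of the tradeoff.

def expected_profit(p_default, interest_paid, loss_on_default):
    """Expected profit = P(repays) * interest earned - P(default) * loss."""
    return (1 - p_default) * interest_paid - p_default * loss_on_default

# Borrower who always pays the full balance: near-zero default risk,
# but also zero interest revenue.
pays_in_full = expected_profit(p_default=0.01, interest_paid=0, loss_on_default=5000)

# Borrower who reliably carries a balance at interest.
carries_balance = expected_profit(p_default=0.05, interest_paid=1200, loss_on_default=5000)

# High-risk borrower: lots of interest owed, but frequent default.
high_risk = expected_profit(p_default=0.40, interest_paid=2500, loss_on_default=5000)

print(pays_in_full, carries_balance, high_risk)
```

Under these made-up numbers, the full-balance payer is a small expected loss, the reliable balance-carrier is the most profitable, and the high-risk borrower is a large expected loss -- which is exactly the "ideal lendee" ordering described above.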

Comment author: Douglas_Knight 03 June 2010 02:59:09PM 1 point [-]

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Expected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").

Comment author: NancyLebovitz 03 June 2010 10:44:05AM *  3 points [-]

I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that.

There may also be a weirdness factor if relatively few people have no debt history.

(1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.

Comment author: JGWeissman 03 June 2010 10:15:10PM 5 points [-]

what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.

Simplifying my behavior enough to keep track of me and control me is tyranny.

Comment author: SilasBarta 03 June 2010 03:04:31PM *  2 points [-]

I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that.

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

There may also be a weirdness factor if relatively few people have no debt history.

Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me.)

(Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.)

ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.

Comment author: CronoDAS 05 June 2010 07:12:37PM *  0 points [-]

::followed link::

Did you ever experience nicotine withdrawal symptoms? In people who aren't long-time smokers, they can take up to a week to appear.

Comment author: Vladimir_M 06 June 2010 12:40:15AM *  2 points [-]

For what that's worth, when I quit smoking, I didn't feel any withdrawal symptoms except being a bit nervous and irritable for a single day (and I'm not even sure if quitting was the cause, since it coincided with some stressful issues at work that could well have caused it regardless). That was after a few years of smoking something like two packs a week on average (and much more than that during holidays and other periods when I went out a lot).

From my experience, as well as what I observed from several people I know very well, most of what is nowadays widely believed about addiction is a myth.

Comment author: SilasBarta 05 June 2010 11:26:47PM 0 points [-]

No, never did. My best guess is that I didn't smoke heavily enough to get a real addiction, though I smoked enough to get the psychoactive effects.

Comment author: Kevin 05 June 2010 11:36:49PM 3 points [-]

Yes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to develop an addiction. While cigarettes (and heroin, and caffeine) are very physically addictive, it still takes sustained, moderately high use to develop a physical addiction. Most cigarette smokers describe their addictions in terms of "x packs per day".

Comment author: SilasBarta 05 June 2010 11:40:23PM 1 point [-]

Most cigarette smokers describe their addictions in terms of "x packs per day".

Okay, then I guess my case isn't informative ... I'd use the pack/year metric instead of the pack/day one.

Comment author: Vladimir_M 03 June 2010 05:48:03PM *  7 points [-]

SilasBarta:

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale.

(See my above comment for an elaboration on this topic.)

(Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.)

Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?

Comment author: Douglas_Knight 03 June 2010 06:09:21PM 1 point [-]

Well, is it really possible that lenders are so stupid ... not be so good after all if applied on a large scale.

These are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.

Comment author: Vladimir_M 03 June 2010 06:22:34PM *  1 point [-]

Yes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment in this thread (see its second paragraph).

Comment author: SilasBarta 03 June 2010 05:54:36PM *  3 points [-]

Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use such criteria, or such criteria would not be so good after all if applied on a large scale.

No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory.

Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?

Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.

(Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)

Comment author: Vladimir_M 03 June 2010 06:16:40PM *  3 points [-]

SilasBarta:

No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory.

This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments.

Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be.

(I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.)

Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot.

Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.

Comment author: SilasBarta 03 June 2010 06:27:26PM *  1 point [-]

I actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is.

If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.

Comment author: NancyLebovitz 03 June 2010 03:26:19PM 3 points [-]

Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them.

Fair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either.

I've seen financial gurus recommend getting a credit card and paying the balance.

And thanks for the ETA.

Comment author: mattnewport 03 June 2010 05:56:38PM 4 points [-]

I've seen financial gurus recommend getting a credit card and paying the balance.

Ramit Sethi for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.

Comment author: SilasBarta 03 June 2010 10:08:23PM *  1 point [-]

This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed!

All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy.

Notice how the citation you give is from a chapter-length treatment by a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards" -- a complex, niche strategy, not standard, general advice from a household name.

Comment author: Blueberry 04 June 2010 01:26:04AM 1 point [-]

All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!"

That would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.

Comment author: Eneasz 02 June 2010 09:07:05PM 4 points [-]

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

Comment author: Kevin 06 June 2010 02:37:51AM *  1 point [-]

As a start, http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy is a branch of psychotherapy with some respect around here because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no evidence.

Comment author: RomanDavis 06 June 2010 03:16:14AM 1 point [-]

Do they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?

Yes this is essentially a post stating my incredulity. Would you mind quelling it?

Comment author: pjeby 06 June 2010 04:14:12AM 2 points [-]

Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective?

It's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal.

CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.

Comment author: AlanCrowe 06 June 2010 08:08:27PM 2 points [-]

Scientific medicine is difficult and expensive. I worry that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches.

I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.

Comment author: Douglas_Knight 06 June 2010 11:49:42AM *  0 points [-]

The claim I've seen associated with Robyn Dawes is that therapy is useful (which I read as "more useful than being on a waiting list"), but that untrained therapists are just as good as those trained under most methods. (ETA: and, contrary to Kevin, they have been tested and found wanting)

Comment author: Kevin 06 June 2010 03:44:02AM *  1 point [-]

It's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it.

http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5

Comment author: torekp 06 June 2010 01:00:23AM *  1 point [-]

I can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine had a recent issue prominently featuring discussion of how to get past the misuse of statistics discussed in this very LW open thread. And it's not the first time the magazine addressed the point.

Comment author: NancyLebovitz 03 June 2010 12:33:12AM 1 point [-]

Does cognitive rationalist therapy count as both rationalist and psychology for purposes of this question?

I think Learning Methods is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.

Comment author: pjeby 06 June 2010 04:31:14AM 2 points [-]

I think Learning Methods is a more sophisticated rationalist approach than CBT

Interesting. I found the site to be not very helpful, until I hit this page, which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy:

Was the movie good or bad? If you answer BOTH, think it through. In a factual sense, can the same movie be good AND bad? If it’s good, how can it be bad? The only way to make sense of a movie being both good and bad is to realize that the goodness and badness does not exist IN the movie, but IN Jack and IN Jill as a reflection of how the movie matches their individual criteria.

The quote is from an article written by an LM student, describing some insights from the learning process that helped her overcome her stage fright.

IOW, at least one aspect of LM sounds a bit like "rationality dojo" to me (in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life).

(Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles or is readily translatable to aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)

Comment author: CronoDAS 02 June 2010 06:37:27PM 0 points [-]

Anyone here live in California? Specifically, San Diego county?

The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!

Comment author: Seth_Goldin 02 June 2010 05:57:48PM 5 points [-]