This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)

Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:

You are magnificent.

(Alternate title for the LW tabloid — "The Rational Enquirer"?)

1Scott Alexander14y
That's... brilliant. I might have to do another one just for that title.
8pjeby14y
Sweet!
3cousin_it14y
Yep, it was probably the first rationalist joke ever that made me laugh.
0FourFire9y
I didn't see that until right now, made me chuckle.
4fburnaby14y
Nearly killed me.
4Unnamed14y
We have a store? Where?
5arundelo14y
Roko Mijic has a Zazzle store. (See also.)
1gaffa14y
Tabloid 100% gold. Hanson slayed me.
0Roko14y
Oh dear oh dear oh dear oh dear...
0Houshalter14y
Lol. Although, what does astrology have to do with anything LessWrong-ish?
2cousin_it14y
That's a reference to Three Worlds Collide.

Why is LessWrong not an Amazon affiliate? I recall buying at least one book due to it being mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong, and assuming a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), that would work out at $375/year (125 purchases × $20 × an implied 15% referral fee).

IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 5*4 minutes for the four sites (.com, .co.uk, .de, .fr), +20 minutes for a GeoIP database, +3-90 minutes to set up URL rewriting (a wide range, since coding often takes far longer than anticipated; I'd be happy to code this) would give a 'worst case' scenario of $173 annualized returns per hour of work.

Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
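Here's a minimal sketch in Python of the URL-rewriting step mentioned above. The affiliate tags are hypothetical placeholders, not real IDs, and the GeoIP step (routing each visitor to their local storefront) is omitted:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical per-storefront affiliate IDs -- Amazon requires a separate
# account (and hence a separate tag) for each national site.
AFFILIATE_TAGS = {
    "www.amazon.com": "lesswrong-20",
    "www.amazon.co.uk": "lesswrong-21",
    "www.amazon.de": "lesswrong-22",
    "www.amazon.fr": "lesswrong-23",
}

def add_affiliate_tag(url: str) -> str:
    """Rewrite an Amazon product URL to carry an affiliate tag."""
    parts = urlparse(url)
    tag = AFFILIATE_TAGS.get(parts.netloc)
    if tag is None:
        return url  # not an Amazon storefront we handle
    query = parse_qs(parts.query)
    query["tag"] = [tag]  # 'tag' is Amazon's affiliate query parameter
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("http://www.amazon.com/dp/0140289208"))
# -> http://www.amazon.com/dp/0140289208?tag=lesswrong-20
```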

2Douglas_Knight14y
From your link, a further link doesn't make it sound great at SO - 2-4x the utter failure. But they are very positive about it because the cost of implementation was very low. Just top-level posts or no geolocating would be even cheaper. You may be amused (or something) by this search.
5mattnewport14y
A possibly relevant data point: I usually post any links to books I put online with my amazon affiliate link and in the last 3 months I've had around 25 clicks from links to books I believe I posted in Less Wrong comments and no conversions.

The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. This is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', 'wave'/whatever of suicides going on there.

When I first encountered the story, it came with a plausible explanation of what causes these suicides, from a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
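As a sanity check on the base-rate claim, here's the arithmetic as a few lines of Python. The 10 deaths and 400,000 employees are the figures from the comment above; the ~20 per 100,000 annual rate is an illustrative assumption, since published estimates for China in this period vary considerably:

```python
from scipy.stats import poisson

employees = 400_000
annual_rate = 20 / 100_000          # assumed suicides per person-year
expected = employees * annual_rate  # expected suicides per year at base rate
print(f"expected suicides per year: {expected:.0f}")   # ~80

# If Foxconn matched the base rate, how surprising are <= 10 suicides?
print(f"P(X <= 10) over a full year: {poisson.cdf(10, expected):.1e}")
# Even pro-rated to the ~5 months elapsed (expected ~33), ten is still
# well below expectation:
print(f"P(X <= 10) over 5 months:   {poisson.cdf(10, expected * 5 / 12):.1e}")
```

On these illustrative numbers, the observed count is far below what the base rate predicts, which is the comment's point.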

That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:

But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.

Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible, me... (read more)

Suicide and methods of suicide are contagious, FWIW.

keyword = "werther effect"

7CannibalSmith14y
http://en.wikipedia.org/wiki/Werther_effect
3wedrifid14y
I was surprised when I read a statistical analysis on national death rates. Whenever there was a suicide by a particular method published in newspapers or on television, deaths of that form spiked in the following weeks. This is despite the copycat deaths often being called 'accidents' (examples included crashed cars and aeroplanes). Scary stuff (or very impressive statistics-fu).
1JoshuaZ14y
Yes, this is connected to the existence of suicide epidemics. The most famous example is the ongoing suicide epidemic over the last fifty years in Micronesia, where both the causes and methods of suicide have been the same (hanging). See for example this discussion.
6Torben14y
If all the members of a cult committed suicide, then the local rate is 100%. The most local rate that we so far know of is 15/400,000, which is 4x below baseline. If these 15 people worked at, say, the same plant of 1,000 workers, you may have a point. But we don't know. At this point there is nothing to explain.
5kodos9614y
Fair enough - my example was poorly thought out in retrospect. But I don't think it's correct that there's nothing to explain. If it's true that all 15 committed suicide by the same method - a fairly rare method frequently used by people who are trying to make a public statement with their death - then there seems to be something needing to be explained. As Fake Steve Jobs points out later in the cited article, if 15 employees of Walmart committed suicide within the span of a few months, all of them by way of jumping off the roof of their Walmart, wouldn't you think that was odd? Don't you think that would be more significant, and more deserving of an explanation, than the same 15 Walmart employees committing suicide in a variety of locations, by a variety of different methods? I'm not committing to any particular explanation here (Douglas Knight's suggestion, for one, sounds like a plausible explanation which doesn't involve any wrongdoing on Foxconn's part), I'm just saying that I do think there's "something to explain".
0kodos9614y
Just curious: why the downvote? Was this just a case of downvote = disagree? If so, what do you disagree with specifically?
1SilasBarta14y
Strange. I thought it made a good point, so I just upvoted it.
2mattnewport14y
The first question that came to mind when I heard about this story was 'what's the base rate?'. I didn't investigate further but a quick mental estimate made me doubt that this represented a statistically significant increase above the base rate. It's disappointing yet unsurprising that few if any media reports even consider this point.
1Bo10201014y
Wasn't there a somewhat well-publicized "spate" of suicides at a large French telecom a while back? I remember the explanation being the same - the number observed was just about what you'd expect for an employer of that size. ETA: http://en.wikipedia.org/wiki/France_Telecom
3mattnewport14y
Even if the suicide rate was somewhat higher than average it still doesn't necessarily tell you much. You should really be looking at the probability of that number of suicides occurring in some distinct subset of the population - given all the subsets of a population that you can identify, you will expect some to have higher suicide rates than the population as a whole. The relevant question is 'what is the probability that you would observe this number of suicides by chance in some randomly selected subset of this size?' Incidentally the rate appears to be below that of Cambridge University students:
1gwern14y
Yes, this is my counter-counter-criticism as well. 'Sure, the overall China rate may be the same, but what's the suicide rate for young, employed workers employed by a technical company with bright prospects? I'll bet it's lower than the overall rate...'
2SilasBarta14y
Agreed. Also, I think what got the suicides in China in the news was that a victim attributed the suicide specifically to some weird policy or rule the company adhered to. It could be that the "normal" suicides at the company are being ignored, and the ones being reported are the suicides on top of this, justifying the concern that this is abnormal.
0mattnewport14y
This was why I went looking for stats on suicides amongst university students. I remembered some talk when I was at Cambridge of a high suicide rate, which you might see as somewhat similarly counter-intuitive to a high suicide rate for 'young, employed workers employed by a technical company with bright prospects'. Actually, there are a number of reasons to expect a somewhat elevated suicide rate in a relatively high pressure environment where large numbers of young people have left home for the first time and are living in close proximity to large numbers of strangers their own age. Stories about high suicide rates at elite universities tend to take a very different tack to stories about Chinese workers however.
-4Houshalter14y
Ya, I can see how something like this could happen. By the way, a few statistics don't exactly prove anything. Were there 10 deaths last year? The year before? Do other factories have similar problems? Etc. Too many variables.
7JoshuaZ14y
Incidentally, note that the evidence strongly suggests that actively taking out your aggression actually increases rather than decreases stress and aggression levels. See for example, Berkowitz's 1970 paper "Experimental investigation of hostility catharsis" in the Journal of Consulting and Clinical Psychology.

Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same one as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious: if Mars and Venus only both appear in the sky when the apocalypse is near, and one agent sees Mars and the other sees Venus, then they conclude the apocalypse is near if they exchange info, but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probabilities they'll both conclude the other one probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?

ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.

ETA: the paper works out an example in section 4.

I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?

Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now, curiously, although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, they know that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).

And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
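A quick Python sketch of this exchange, for anyone who wants to play with it. It treats each announcement as publicly shrinking the set of die values the announcer could hold, since the candidate sets depend only on the public announcement history. (Setting target to {7, 8} reproduces the variation described below.)

```python
from fractions import Fraction

def estimate(own, others, target):
    """P(sum in target), given your own die and the publicly known
    candidate set for the other player's die."""
    return Fraction(sum((own + o) in target for o in others), len(others))

def exchange(d1, d2, target, rounds=5):
    # Publicly known candidate sets for each player's die; both players
    # can track these, since they depend only on what has been announced.
    s1 = s2 = frozenset(range(1, 7))
    for r in range(rounds):
        e1, e2 = estimate(d1, s2, target), estimate(d2, s1, target)
        print(f"round {r}: player 1 says {e1}, player 2 says {e2}")
        # Each announcement rules out die values that would have produced
        # a different announcement (simultaneous update, so both use the
        # pre-round sets):
        s1, s2 = (frozenset(v for v in s1 if estimate(v, s2, target) == e1),
                  frozenset(v for v in s2 if estimate(v, s1, target) == e2))

exchange(4, 5, target={9})
# round 0: both say 1/6 (other die could be anything in 1-6)
# round 1 on: both say 1/4 (other die known to be in 3-6) -- common
# knowledge, yet not the 0-or-1 answer they'd get by sharing raw die values.
```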

Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.

Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.

I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.

Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.

2Roko14y
But in reality, what happens when people try to aumann involves a different set of problems, such as status-signalling, especially the idea that updating toward someone else's probability is instinctively seen as giving them status.
1cousin_it14y
Thanks a lot for both links. I already understood common knowledge, but the paper is a very pleasing and thorough treatment of the topic.

Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it, as once it is superseded by a new thread, few would go back and look at the posts in the previous one. This would predict a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month, a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

2Blueberry14y
I would support that.
0RobinZ14y
From observations even of previous "Part 2"s, it would seem that there is enough content to support that frequency of open thread.
4Kaj_Sotala14y
I don't post in the open threads much, but if I run into a good rationality quote I tend to wait until the next rationality quotes thread is opened unless the current one is less than a week or so old.

I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.

So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!

My theory of happiness.

In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.

Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing the chance that your prospective mate accepts you. [...]

I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense

... (read more)
8Houshalter14y
How does this make sense exactly? A happy person, with more resources, would be better off not taking risks that could result in him losing what he has. On the other hand, a sad person with few resources would need to take more risks than the happy person to get the same results. If you told a rich person, "jump off that cliff and I'll give you a million dollars," they probably wouldn't do it. On the other hand, if you told a poor person the same thing, they might do it as long as there was a chance they could survive. My idea of why people were happy wasn't a static value of how many resources they had, but a comparative value. A rich person thrown into poverty would be very unhappy, but the poor person might be happy.
7pjeby14y
Kaj's hypothesis is a bit off: what he's actually talking about is the explore/exploit tradeoff. An animal in a bad (but not-yet catastrophic) situation is better off exploiting available resources than scouting new ones, since in the EEA, any "bad" situation is likely to be temporary (winter, immediate presence of a predator, etc.) and it's better to ride out the situation. OTOH, when resources are widely available, exploring is more likely to be fruitful and worthwhile. The connection to happiness and risk-taking is more tenuous. I'd be interested in seeing the results of that experiment. But "rich" and "poor" are even more loosely correlated with the variables in question - there are unhappy "rich" people and unhappy "poor" people, after all. (In other words, this is all about internal, intuitive perceptions of resource availability, not rational assessments of actual resource availability.)
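The explore/exploit point is easy to see in a toy simulation (all payoff numbers here are invented for illustration): exploring pays more in expectation but has high variance, so an agent that explores near subsistence risks ruin, while one that explores only from a surplus does not:

```python
import random

def simulate(explore_policy, seed, steps=200):
    """Return final resources, or None if the agent was ruined."""
    rng = random.Random(seed)
    resources = 10.0
    for _ in range(steps):
        if explore_policy(resources):
            resources += rng.gauss(1.5, 4.0)  # risky: better mean, high variance
        else:
            resources += 1.2                  # safe: small sure payoff
        resources -= 1.0                      # cost of living
        if resources <= 0:
            return None
    return resources

cautious = lambda r: r > 15   # explore only with a resource surplus
reckless = lambda r: True     # always explore

trials = range(2000)
print("ruined (cautious):", sum(simulate(cautious, s) is None for s in trials))
print("ruined (reckless):", sum(simulate(reckless, s) is None for s in trials))
```

The reckless policy earns more when it survives, but a run of bad draws from a low starting balance kills it; the cautious policy banks the sure payoff until it can afford the variance.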
2RobinZ14y
If I were to wager a guess, the people who would accept the deal are those who feel they are in a catastrophic situation. Speaking of catastrophic situations, have you seen The Wages of Fear or any of the remakes? I've only seen Sorcerer, but it was quite good. It's a rather more realistic situation than jumping off a cliff, but the structure is the same: a group of desperate people driving cases of nitroglycerin-sweating dynamite across rough terrain to get enough money that they can escape.
0Houshalter14y
Or maybe not...
1RobinZ14y
I'd buy "main road incorporating rope suspension bridges" over "millionaire hiring people to throw themselves off cliffs", but I see what you mean.
0Kaj_Sotala14y
I believe you're right, now that I think about that.
1Kaj_Sotala14y
I was kind of thinking expected value. In principle, if you always go by expected value, in the long run you will end up maximizing your value. But this may not be the best move to make if you're low on resources, because with bad luck you'll run out of them and die even though you made the moves with the highest expected value. However, your objection does make sense and Eby's reformulation of my theory is probably the superior one, now that I think about it.
8Alexandros14y
Hi Kaj, I really liked the article. I had a relevant theory to explain the perceived difference of attitudes of north Europeans versus south Europeans. I guess you could call it a theory of unhappiness. Here goes: I take as granted that mildly depressed people tend to make more accurate depictions of reality, that north Europeans have higher incidence of depression and also much better functioning economies and democracies. Given a low resource environment, one needs to plan further, and make more rational projections of the future. If being on the depressive side makes one more introspective and thoughtful, then it would be conducive to having better long-term plans. In a sense, happiness could be greed-inducing, in a greedy algorithm sense. This more or less agrees with kaj's theory. OTOH, not-happiness would encourage long-term planning and even more co-operative behaviour. In the current environment, resources may not be scarce, but our world has become much more complex, actions having much deeper consequences than in the ancestral environment (Nassim Nicholas Taleb makes this point in Black Swan) therefore also needing better thought out courses of action. So northern Europeans have lucked out where their adaptation to climate has been useful for the current reality. If one sees corruption as a local-greedy behaviour as opposed to lawfulness as a global-cooperative behaviour, this would also explain why going closer to the equator you generally see an increase in corruption and also failures in democratic government. Taken further, it would imply that near-equator peoples are simply not well-adapted to democratic rule, which demands a certain limiting of short-term individual freedom for the longer-term common good, and a more distributed/localised form of governance would do much better. I think this (rambling) theory can more or less be pieced together with kaj's, adding long-term planning as a second dimension. Disclaimer: Before anyone accuses me of dis
3Jayson_Virissimo14y
If any given instance of discrimination increases the degree of correspondence between your map and the territory, then there is no need for apology. Are these sorts of disclaimers really necessary here?
1RomanDavis14y
Relevant to your interests: http://www.youtube.com/watch?v=A3oIiH7BLmg&feature=channel
0Alexandros14y
Greatly appreciated. Present-oriented vs. future-oriented is a good way to put it, and I suspect there is some more research I could find if I dig further behind that speech.
0Will_Newsome14y
And a very condensed note I wrote to myself (in brainstormish mode, without regard for feasibility or testability):

Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:

... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’

(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)

This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.

My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?

I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?

If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.

But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.

Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?

2torekp14y
Searle thinks that consciousness does cause behavior. In the scary story, the normal cause of behavior is supplanted, causing the outward appearance of normality. Thus, it's not that consciousness doesn't affect things, but just that its effects can be mimicked. Nisan's criticism is devastating, and has the advantage of not requiring technological marvels to assess. I do like the elegance of your simple solution, though.
7Vladimir_M14y
David Chalmers discusses this particular passage by Searle extensively in his paper "Absent Qualia, Fading Qualia, Dancing Qualia": http://consc.net/papers/qualia.html He demonstrates very convincingly that Searle's view is incoherent except under the assumption of strong dualism, using an argument based on more or less the same basic idea as your objection.

http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php

'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'

The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html

To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see viewership of their articles, and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.

LW too focused on verbalizable rationality

This comment got me thinking about it. Of course LW, being a website, can only deal with verbalizable information (rationality). So what are we missing? Skillsets that are not verbalizable and have to be learned in other (practical) ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer had already suggested the idea of a rationality dojo. What do you think?

7Will_Newsome14y
I've been talking to various people about the idea of a Rationality Foundation (working title) which might end up sponsoring or facilitating something like rationality dojos. Needless to say this idea is in its infancy.
2Morendil14y
The example of coding dojos for programmers might be relevant, and not just for the coincidence in metaphors.
5RomanDavis14y
I'm a draftsman and it always struck me how absolutely terrible the English language is for talking about ludicrously simple visual concepts precisely. Words like parallel and perpendicular should be one syllable long. I wonder if there's a way to apply rationality/mathematical thinking beyond geometry and to the world of art.
0JoshB14y
According to wiki: "Tacit knowledge (as opposed to formal or explicit knowledge) is knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it" Thus: "Effective transfer of tacit knowledge generally requires extensive personal contact and trust. Another example of tacit knowledge is the ability to ride a bicycle." Supports the dojo idea...perhaps in SecondLife once the graphics are better?
6Richard_Kennaway14y
How much personal contact and trust does it take to learn to ride a bicycle?
3RobinZ14y
As someone who learned cycling as a near-adult, the main insight is that you turn the wheel in the direction in which the bike is falling to push it back vertical. Once I had been told that negative-feedback mechanism, the only delay was until I got frustrated enough with going slowly to say, "heck with this 'rolling down a slight slope' game, I'm just going to turn the pedals." Whereupon I was genuinely riding the bicycle. ...for about a minute, until I got the bright idea of trying to jump the curb. Did you know that rubbing the knee off a pair of jeans will leave a streak of blue on concrete?
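That "steer into the fall" insight really is a negative-feedback law, and a deliberately crude toy model shows why it works. The dynamics and gains below are invented for illustration, not a real bicycle model:

```python
def final_lean(gain, dt=0.001, steps=5000):
    """Integrate a toy lean-angle model: gravity amplifies the lean,
    steering toward the fall counteracts it."""
    lean, lean_rate = 0.1, 0.0          # small initial wobble (radians)
    for _ in range(steps):
        steer = gain * lean             # turn the wheel toward the fall
        lean_accel = 9.8 * lean - 25.0 * steer  # gravity vs. correction
        lean_rate += lean_accel * dt
        lean += lean_rate * dt
    return lean

print(f"hands fixed (gain 0):     lean = {final_lean(0.0):.2f}")  # blows up: falls
print(f"steer into fall (gain 1): lean = {final_lean(1.0):.2f}")  # stays bounded
```

With zero gain the lean grows exponentially, like an inverted pendulum; with the feedback on, the lean just oscillates around vertical.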
2Douglas_Knight14y
What was your total time frame in learning to ride? Was there a period before you were told about turning the wheel?
1RobinZ14y
I estimate the total time between donning the helmet and hitting the sidewalk was less than an hour - but it was probably a decade ago, so I don't trust my recollections.
1cousin_it14y
Hahaha, great catch. Though maybe they meant personal contact with a bicycle!
0Morendil14y
Uh, lots? Who did you learn it from?
7SilasBarta14y
Per my upcoming "Explain Yourself!" article, I am skeptical about the concept of "tacit knowledge". For one thing, it puts up a sign that says, "Hey, don't bother trying to explain this in words", which leads to, "This is a black box; don't look inside", which leads to "It's okay not to know how this works". Second, tacit knowledge often turns out to be verbalizable, questioning whether the term "tacit" is really calling out a valid cluster in thingspace[1]. For example, take the canonical example of learning to ride a bike. It's true that you can learn it hands-on, using the inscrutable, patient training of the master. But you can also learn it by being told the primary counterintuitive insights ("as long as you keep moving, you won't tip over"), and then a little practice on your own. In that case, the verbal knowledge has substituted one-for-one with (much of) the tacit learning you would have gained on your own from practice. So how much of it was "really" tacit all along? How much of it are you just calling tacit because the master never reflected on what they were doing? So for me, the appeal to "difficulty of verbalizing it" certainly has some truth to it, but I find it mainly functions to excuse oneself from critical introspection, and from opening important black boxes. I advise people to avoid using this concept if remotely possible; it tends to say more about you than the inherent inscrutability of the knowledge. [1] To someone who sucks at programming, the ability to revise a recipe to produce more servings is "tacit knowledge".
5Morendil14y
As someone who has made much of the concept of tacit knowledge in the past, I'll have to say you have a point. (I'm now considering the addendum: "made much of it because it served my interests to present some knowledge I claimed to have as being of that sort". I'm not necessarily endorsing that hypothesis, just acknowledging its plausibility.) It still feels as if, once we toss that phrase out the window, we need something to take its place: words are not universally an effective method of instruction, practice clearly plays a vital part in learning (why?), and the hypothesis that a learner reconstructs knowledge rather than being the recipient of a "transfer" in a literal sense strikes me as facially plausible given the sum of my learning experiences. Perhaps an adult can comprehend "as long as you keep moving, you won't tip over", but I have a strong intuition it wouldn't go over very well with kids, depending on age and dispositions. My parenting experience (anecdotal evidence as it may be) backs that up. You need to see what a kid is doing right or wrong to encourage the former and correct the latter, you need a hefty dose of patience as the kid's anxieties get in the way sometimes for a long while. Learning to ride a bike is a canonical example because it is taught early on, there is hedonic value in learning it early on, but it is typically taught at an age when a kid rarely (or so my hunch says) has the learning-ability to understand advice such as "as long as you keep moving, you won't tip over". There is such a thing as learning to learn (and just how verbalizable is that skill?). It's all too easy to overgeneralize from a sparse set of examples and obtain a simple, elegant, convincing, but false theory of learning. I hope your article doesn't fall into that trap. :)
2SilasBarta14y
I don't disagree, but I don't see how it contradicts my position either. The evidence you give against words being effective is that, basically, they don't fully constrain what the other person is being told to do, so they can always mess up in unpredictable ways. That's true, but it just shows how you need to understand the listener's epistemic state to know which insights they lack that would allow them to bridge the gap. People do get this wrong, and end up giving "let them eat cake" advice -- advice that, if it were useful, would already have solved the problem. But at the same time, a good understanding of where they are can lead to remarkably informative advice. (I've noticed Roko and HughRistik are excellent at this when it comes to human sociality, while some are stuck in "let them eat cake" land.) Well, in my case, once it clicked for me, my thought was, "Oh, so if you just keep moving, you won't tip over, it's only when you stop or slow down that you tip -- why didn't he just tell me that?" Well, if it were a sparse set I wouldn't be so confident. I have a frustratingly long history of people telling me something can't be explained or is really hard to explain, followed by me explaining it to newbies with relative ease. And of cases where someone appeals to their inarticulable personal experience for justification, when really it was an articulable hidden assumption they could have found with a little effort. Anyone is welcome to PM me for an advance draft of the article if they're interested in giving feedback.
1NancyLebovitz14y
I'm in general agreement, but it leaves me wondering if you underestimate how much effort it takes to notice and express how to do things which are usually non-verbal.
3SilasBarta14y
I don't understand. The part you quoted isn't about expressing how to do non-verbal things; it's about people who say, "when you get to be my age, you'll agree, [and no I can't explain what experiences you have as you approach my age that will cause you to agree, because that would require a claim regarding how to interpret the experience which you have a chance of refuting]" What does that have to do with the effort needed to express how to do non-verbal things?
0NancyLebovitz14y
Excuse me-- I wasn't reading carefully enough to notice that you'd shifted from claims that it was too hard to explain non-verbal skills to claims that it was too hard to explain the lessons of experience.
0SilasBarta14y
Okay. Well, then, assuming your remark was a reply to a different part of my comment, my answer is that yes, it may be hard, but for most people, I'm not convinced they even tried.
0Richard_Kennaway14y
xkcd
0mattnewport14y
Am I interpreting you correctly that you are not denying that some skills can only be learned by practicing the skill (rather than by reading about or observing the skill), but are saying that verbal or written instruction, done well, is just as effective an aid to practice as demonstration? I'm still a bit skeptical about this claim. When I was learning to snowboard for example it was clear that some instructors were better able to verbalize certain key information (keep your weight on your front foot, turn your body first and let the board follow rather than trying to turn the board, etc.) but I don't think the verbal instructions would have been nearly as effective if they were not accompanied by physical demonstrations. It's possible that a sufficiently good instructor could communicate just as effectively through purely verbal instruction but I'm not sure such an instructor exists. The fact that this is a rare skill also seems relevant even if it is possible - there are many more instructors who can be effective if they are allowed to combine verbal instruction with physical demonstrations.
1SilasBarta14y
Good points, but keep in mind snowboarding instructors aren't optimizing the same thing that a rationalist (in their capacity as a rationalist) is optimizing. If you just want to make money, quickly, and churn out good snowboarders, then use the best tools available to you -- you have no reason to convert the instruction into words where you don't have to. But if you're approaching this as a rationalist, who wants to open the black box and understand why certain things work, then it is a tremendously useful exercise to try to verbalize it, and identify the most important things people need to know -- knowledge that can allow them to leapfrog a few steps in learning, even and especially if they can't reach the Holy Grail of full transmission of the understanding. And I'd say (despite the first paragraph in this comment) that it's a good thing to do anyway. I suspect that people's inability to explain things stems in large part from a lack of trying -- specifically, a lack of trying to understand what mental processes are going on inside of them that allows a skill to work like it does. They fail to imagine what it is like not to have this skill and assume certain things are easy or obvious which really aren't. To more directly answer your question, yes, I think verbal instruction, if it understands the epistemic state of the student, can replace a lot of what normally takes practice to learn. There are things you can say that get someone in just the right mindset to bypass a huge number of errors that are normally learned hands-on. My main point, though, is that people severely overestimate the extent of their knowledge which can't be articulated, because the incentives for such a self-assessment are very high. Most people would do well to avoid appeals to tacit knowledge, and instead introspect on their knowledge so as to gain a deeper understanding of how it works, labeling knowledge as "tacit" only as a last resort.
1Blueberry14y
I would suspect this has more to do with the skill of the student in translating verbal descriptions into motions. You can perfectly understand a series of motions to be executed under various conditions, without having the motor skill to assess the conditions and execute them perfectly in real-time.
4Tyrrell_McAllister14y
I'm looking forward to your article, and I think that you're right to emphasize the vast gap between "unverbalizable" and "I don't know at the moment how to verbalize it". But, to really pass the "bicycle test", wouldn't you have to be able to explain verbally how to ride a bike so well that someone could get right on the bike and ride perfectly on the first try? That is, wouldn't you have to be able to eliminate even that "little practice on your own"? Or is there some part of being able to ride a bike that you don't count as knowledge, and which forms the ineliminable core that needs to be practiced?
2SilasBarta14y
Depends on what the "bicycle test" is testing. For me, the fact that something is staked out as a canonical, grounding example of tacit knowledge, and then is shown to be largely verbalizable, blows a big hole in the concept. It shows that "hey, this part I can't explain" was groundless in several subcases. I do agree that some knowledge probably deserves to be called tacit. But given the apparent massive relativity of tacitness, and the above example, it seems that these cases are so rare that you're best off working from the assumption that nothing is tacit, rather than looking for cases that you can plausibly claim are tacit. It's like any other case where one possibility should be considered last. If you do a random test on General Relativity and find it to be way off, you should first work from the assumption that you, rather than GR, made a mistake somewhere. Likewise, if your instinct is to label some of your knowledge as tacit, your first assumption should be, "there's some way I can open up this black box; what am I missing?". Yes, these beliefs could be wrong -- but you need a lot more evidence before rejecting them should even be on the radar. (And to be clear, I don't claim my thesis about tacitness to deserve the same odds as GR!)
1Morendil14y
Just to be clear, I don't think it has been shown in the case of bike-riding that the knowledge can be transferred verbally. You can give someone verbal instruction that will help them improve faster at bike-riding, that isn't at issue. It's much less clear that telling someone the actual control algorithm you use when you ride a bike is sufficient to transform them from novice into proficient bike rider. You can program a robot to ride a bike and in that sense the knowledge is verbalizable, but looking at the source code would not necessarily be an effective method of learning how to do it.
1SilasBarta14y
I think being able to verbally transmit the knowledge that solves most of the problem for them is proof that at least some of the skill can be transferred verbally. And of course it doesn't help to tell someone the detailed control algorithm to ride a bike, and I wouldn't recommend doing so as an explanation -- that's not the kind of information they need! One day, I think it will be possible to teach someone to ride a bike before they ever use one, or even carry out similar actions, though you might need a neural interface rather than spoken words to do so. The first step in such a quest is to abandon appeals to tacit knowledge, even if there are cases where it really does exist.
3Richard_Kennaway14y
None, and nobody. I got a bicycle and tried to ride it until I could ride it. It took about three weeks from never having sat on a bicycle to confidently mixing with heavy traffic. (At the age of 22, btw. I never had a bicycle as a child.) The first line that JoshB quoted from Wikipedia is fine -- there is this class of knowledge -- but I don't agree with the second at all. Some things you can learn just by having a go untutored. Where an instructor is needed, e.g. in martial arts, the only trust required is enough confidence in the competence of the teacher to do as he says before you know why.
0Morendil14y
How typical is that bike-learning history in your estimation?
0Richard_Kennaway14y
I guess that more people learn to ride a bike in childhood than as adults, but I believe that the usual method at any age is to get on it and ride it. There really isn't much you can do to teach someone how to do it.
1Morendil14y
OK, so I suppose it doesn't take much personal contact and trust to acquire a skill of the bike-riding type, particularly if you're an autonomous enough learner and the skill is relatively basic. The original assertion, though, was about personal contact and trust being required to transfer a skill of the bike-riding type, and perhaps one reason to make this assertion is that the usual method involves a parent dispensing encouragement and various other forms of help, vis-a-vis a child. (I learnt it from my grandfather, and have a lot of positive affect to accompany the memories.) Providing an environment in which learning, an intrinsically risky activity, becomes safe and pleasurable - I know from experience that this takes rapport and trust, it doesn't just happen. Such an environment is perhaps not a prerequisite to acquiring a non-verbalized skill, but it does help a lot; as such it makes it possible for people who would otherwise give up on learning before they made it to the first plateau.
1Richard_Kennaway14y
We must have had very different experiences of many things. Tell me more about learning being risky. I have been learning Japanese drumming since the beginning of last year (in a class), and stochastic calculus in the last few months (from books), and "risky" is not a word it would occur to me to apply to either process. The only risk I can see in learning to ride a bicycle is the risk of crashing.
1Morendil14y
One major risk involved in learning is to your self-esteem: feeling ridiculous when you make a mistake, feeling frustrated when you can't get an exercise right for hours of trying, and so on. As you note, in physical aptitudes there is a non-trivial risk of injury. There is the risk, too, of wasting a lot of time on something you'll turn out not to be good at. Perhaps these things seem "safe" to you, but that's what makes you a learner, in contrast with large numbers of people who can't be bothered to learn anything new once they're out of school and in a job. They'd rather risk their skills becoming obsolete and ending up unemployable than risk learning: that's how scary learning is to most people.
2Richard_Kennaway14y
I would say that the problem then is with the individual, not with learning. Those feelings rest on false beliefs that no-one is born with. Those who acquire them learn them from unfortunate experiences. Others chance to have more fortunate experiences and learn different attitudes. And some manage in adulthood to expose their false beliefs to the light of day, clearly perceive their falsity, and stop believing them. Thus it is said, "The things that we learn prevent us from learning."
2JoshuaZ14y
I doubt people are consciously making this decision, but rather they aren't calculating the potential rewards as opposed to potential risks well. A risk that is in the far future is often taken less seriously than a small risk now.
0Morendil14y
People who buy insurance are demonstrating ability to trade off small risks now against bigger risks in the future, but often the same people invest less in keeping their professional skills current than they do in insurance. Personal experience tells me that I had (and still have) a bunch of Ugh fields related to learning, which suggests that there are actual negative consequences of engaging in the activity (per the theory of Ugh fields). My hunch is that the perceived risk of learning accounts in significant part for why people don't invest in learning, compared to the low perceived reward of learning. I could well be wrong. How could we go about testing this hypothesis?
0JoshuaZ14y
I'm not sure. It may require a more precise statement to make it testable.
0Blueberry14y
Are you serious? I could never have learned to ride a bike without my parents spending hours and hours trying to teach me. Did you also learn to swim by jumping into water and trying not to drown? I'd be very surprised if most people learned to ride a bike without instruction, but I may be unusual.
1Morendil14y
There was actually at some point a theory that "babies are born knowing how to swim", and on one occasion at around age three, at a holiday resort the family was staying at, I was thrown into a swimming pool by a caretaker who subscribed to this theory. It seems that after that episode nobody could get me to feel comfortable enough in water to get any good at swimming (in spite of summer vacations by the seaside for ten years straight, under the care of my grandad who taught me how to ride a bike). I only learned the basics of swimming, mostly by myself with verbal instruction from a few others, around age 30.
0Blueberry14y
I'm so sorry. That is truly horrific abuse.
0Richard_Kennaway14y
Maybe there's a cultural difference, but I don't know what country you're in (or were in). I've never heard of anyone learning to ride a bike except by riding it. But clearly we need some evidence. I don't care for the bodge of using karma to conduct a poll, so I'll just ask anyone reading this who can ride a bicycle to post a reply to this comment saying how they learned, and in what country. "Taught" should mean active instruction, something more than just someone being around to provide comfort for scrapes and to keep children out of traffic until they're ready. Results so far:
RichardKennaway: self-taught as adult, late 70s, UK
Morendil: taught in childhood by grandfather, UK?
Blueberry: taught in childhood by parents, where?
So that's two to one against my current view, but those replies may be biased: other self-taught people will not have had as strong a reason to post agreement.
0SilasBarta14y
I don't know how much this will support your position, but: mid 1980s, Texas, USA, by my father. And as I said above, it did take a while to learn, but afterward, my reaction was, "Wait -- all I have to do is keep in motion and I won't fall over. Why didn't he just say that all along?" That began my long history of encountering people who overestimate the difficulty of, or fail to simplify, the process of teaching or justifying something. ETA: Also, I haven't ridden a bike in over 15 years, so that might be a good test of whether my "just keep in motion" heuristic allows me to preserve the knowledge.
6mattnewport14y
The fact that 'like riding a bike' is a saying used to describe skills that you never forget suggests that it wouldn't be a very good test.
0SilasBarta14y
Yeah, I wasn't so sure it would be a good test. Still, I'm not sure how well the "you don't forget how to ride a bike" hypothesis is tested, nor how much of its unforgettability is due to the simplicity of the key insights.
1NancyLebovitz14y
Most people don't store the insights of bike riding verbally-- the insights are stored kinesthetically. It seems to be much easier to forget math.
0SilasBarta14y
I don't disagree, but there's typically a barrier, increasing with time since last use, that must be overcome to re-access that kinesthetic knowledge. And I think verbal heuristics like the one I gave can greatly shorten the time you need to complete this process.
0Blueberry14y
early 90s, US. I also had training wheels for a while first, which didn't actually teach me anything. I didn't learn until they were removed. And I also had someone running along for reassurance.
0Jowibou14y
Canada, mid 1960s. Brother tried to teach me but I mostly ignored him. Used bike with training wheels, which I raised higher and higher and removed completely after a couple of weeks.
0NancyLebovitz14y
United States, early 60s (I think it's worth mentioning when because cultures change), just given a bike with training wheels, and I figured it out myself.
0Morendil14y
France, but close enough. ;) There's some variation in method of instruction. My grandpa had fitted my bike with a long handle in the back and used that to help me balance after taking the training wheels off. With one of my kids I tried the method of gradually lifting the training wheels to make the balance more precarious over time. One of the other two just "got it", as I remember, in one or two sessions. Otherwise it was the standard riding down a slight slope and advising them "keep your feet on the pedals", and running alongside for reassurance.
0RomanDavis14y
The truth is, that's how most skilled artists learned to draw. In the past, there was a more formalized teaching role, often starting at age eight, and you can go through school and even get through art school having been given so little knowledge that, if you know how to draw a human from imagination, you can confidently say you are an autodidact. It's not because art (particularly representational figure drawing, from imagination or not) is inherently unteachable, but a lot of people tend to think so. This is not the only skill like this, although I think it's one that's perhaps the least understood and where misinformation is the most tolerated.
0realitygrill14y
I think it would be great to systematically explore and develop useful skillsets, perhaps in a modular fashion. We do have sequences. I would join a rationality dojo immediately. What do you mean by practical ways? I understand the difficulty of transferring kinesthetic or social understanding, but how can we overcome that in nonverbalized fashion?
1roland14y
Some things have to be shown; you sometimes have to take part in an activity to "get" it, learn by trial and error, get feedback pointing out mistakes that you are unaware of, etc.
2CannibalSmith14y
For example?
2RomanDavis14y
Do you think you could describe this image to an arbitrarily talented artist and end up with an image that even looked like it was based on it? http://smithandgosling.files.wordpress.com/2009/05/the-reader.jpg It's not so much, "Such insolence, our ideas are so awesome they can not be broken down by mere reductionism" as "Wow, words are really bad at describing things that are very different from what most of the people speaking the language do." I think you could make an elaborate set of equations on a cartesian graph and come up with a drawing that looked like it and say fill up RGB values #zzzzzz at coordinates x,y or whatever, but that seems like a copout since that doesn't tell you anything about how Fragonard did it.
2bogdanb14y
This reminds me of an exercise we did in school. (I don't remember either when or for what subject.) Everyone was to make a relatively simple image, composed of lines, circles, triangles and the such. Then, without showing one's image to the others, each of us was to describe the image, and the others were to draw according to the description. The "target" was to obtain reproductions as close as possible to the original image. It was a very interesting exercise for all involved: it's surprisingly hard to describe precisely, even given the quite simple drawings, in such a way that everyone interprets the description the way you intended it. I vaguely remember I did quite well compared with my classmates in the describing part, and still had several "transcriptions" that didn't look anywhere close to what I was saying. I think the lesson was about the importance of clear specifications, but then again it might have been just something like English (a foreign language for me) vocabulary training. An example: Draw a square, with horizontal & vertical sides. Copy the square twice, once above and once to the right, so that the two new squares share their bottom and, respectively, left sides with the original square. Inside the rightmost square, touching its bottom-right corner, draw another square of half the original's size. (Thus, the small square shares its bottom-right corner with its host, and its top-left corner is on the center of its host.) Inside the topmost square, draw another half-size square, so that it shares both diagonals with its host square. Above the same topmost square, draw an isosceles right-angled triangle; its sides around the right angle are the same length as the large squares'; its hypotenuse is horizontal, just touching the top side of the topmost square; its right angle points upwards, and is horizontally aligned with the center of the original square. (Thus, the original square
0Larks14y
My mum had to do this task for her work, save with building blocks, and for the learning-impaired. Instructions like 'place the block flat on the ground, like a bar of soap' were useful. One nit-pick: when you say squares half the size, you mean half the side length, which is one quarter of the area.
0RobinZ14y
Color and line weight have not been specified, I note. Nor position relative to the canvas.
1Risto_Saarelma14y
You could probably get pretty good results without messing with complex equations, by first describing the full picture, then describing what's in four quadrants made by drawing vertical and horizontal lines that split the image exactly in half, then describing quadrants of these quadrants, split in a similar way, and so on. The artist could use their skills to draw the details without an insanely complex encoding scheme, and the grid discipline would help fix the large-scale geometry of the image. Edit: A 3x3 grid might work better in practice; it's more natural to work with a center region than to put the split point right in the middle of the image, which most probably contains something interesting. On the other hand, maybe the lines breaking up the recognizable shapes in the picture (already described in casual terms for the above-level description) would help bring out their geometrical properties better. Edit 2: Michael Baxandall's book Patterns of Intention has some great stuff on using language to describe images.
1RomanDavis14y
Drawing a photograph with the aid of a grid is a common technique for making copying easier, although it's also sometimes used as a teaching tool for early artists. I'm not in love with this explanation (Loomis does much better) but this should give you the essential idea: http://drawsketch.about.com/od/drawinglessonsandtips/ss/griddrawing.htm

As a teaching tool for people who can't draw, I haven't seen it be effective, but it's awesome if you've got a deadline and don't want to spend all your time checking and rechecking your proportions. I doubt it would be effective here, since it's so easy for novice artists to screw up even when they have the image right in front of them.

There's a more effective method which uses a ruler or compass and is often used to copy Bargue drawings: take precise measurements around a line at the meridian and essentially connect the dots. For the curious: http://conceptart.org/forums/showthread.php?t=121170

This might work long distance: "Okay, draw the next dot 9/32nds of an inch away at 12 degrees down to the right." This still seems like a bit of a cop out, though. Yes, there are ways to assemble copies of images using a grid, but it doesn't help us figure out how such freehand images were made in the first place. We're not even taking a crack at the little black box.
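A sketch of how those long-distance dot instructions could be interpreted, converting each "distance at an angle" step into coordinates; the function name and the clockwise-angle convention are illustrative assumptions, not from the thread:

import math

def next_dot(x, y, distance, angle_degrees):
    # Measure the angle clockwise from horizontal, so that with y growing
    # downward (as on a scanned page) positive angles point down to the right.
    theta = math.radians(angle_degrees)
    return (x + distance * math.cos(theta),
            y + distance * math.sin(theta))

# "draw the next dot 9/32nds of an inch away at 12 degrees down to the right"
x, y = next_dot(0.0, 0.0, 9 / 32, 12)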
0NancyLebovitz14y
Drawing on the Right Side of the Brain seems to be the classic for teaching people how to draw. It's a bunch of methods for seeing the details of what you're seeing (copying a drawing held upside down, drawing shadows rather than objects) so that you draw what you see rather than a mental simplified hieroglyphic of what you see.

New papers from Nick Bostrom's site.

2timtyler14y
The 2nd one, "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks", is good reading.
0gwern14y
Speaking of the Simulation argument, I just stumbled across (but haven't read) http://www.arsdisputandi.org/publish/articles/000338/index.html / http://www.arsdisputandi.org/publish/articles/000338/article.pdf :

This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.

Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.

There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with: 1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks, a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the... (read more)

2xamdam14y
Reminded me of one of my favorite movie dialogues - from Sunshine. The context was actually physics, but the complexity of the situation and the time frame put the characters in the same situation as you with the Cabinet Ministers.

Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.

Kaneda: You have to come down on one side or the other. I need a decision.

Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.

Kaneda: And?

Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.

http://www.imdb.com/title/tt0448134/quotes?qt0386955
2James_K14y
Yes, that's a good example. There are times when a decision has to be made, and saying you don't know isn't very useful. Even if you have very little to go on, you still have to decide one way or the other.
0realitygrill14y
I am not at all like you. I don't have much interest in policy at all, and I do tend to refuse to hold a position, being very mindful of how easy it is to be completely off course (Probably from reading too much history of science. It's "the graveyard of dead ideas", after all.). I'm likely to tell the Cabinet Ministers to get off my back or they'll have absolutely useless recommendations. However, I think you have hit upon the point that makes Bayesianism attractive to me: it's rationality you can use to act in real-time, under uncertainty, in normal life. Traditional Rationality is slow.
0James_K14y
I see your point; the trouble is that a recommendation that comes too late is often absolutely useless. A lot of policy is time-dependent; if you don't act within a certain time frame then you might as well do nothing. While sometimes doing nothing is the right thing to do, a late recommendation is often no better than no recommendation.
0realitygrill14y
Yeah, I forgot to add that you've budged me slightly from my staunch positivist attitude for social science. Thanks. Reading up on complex adaptive systems has made me just that much more skeptical about our ability to predict policy's effects, and perhaps biased me.
1James_K14y
It's nice to know I've had an influence :) As it happens, I'm pretty sceptical as to how much we can know as well. There's nothing like doing policy to gain an understanding of how messy it can be. The social sciences have a less than wonderful record in developing knowledge (look at the record of development economics, as one example), and economic forecasting is still not much better than voodoo, but it's not like there's another group out there with all the answers. We don't have all of the answers, or even most of them, but we're better than nothing, which is the only alternative.
5matt14y
Nothing is often a pretty good alternative. Government action always comes at a cost, even if only the deadweight loss of taxation (keyphrase "public choice" for reasons you might expect the cost to be higher than that). I'm not trying to turn this into a political debate, but you should consider doing nothing not necessarily a bad thing, and what you do not necessarily better.
2James_K14y
When I said "better than nothing" I was referring to advice, not the actual actions taken. My background is in economics so I'm quite familiar with both dead-weight loss of taxation and public choice theory, though these days I lean more toward Bryan Caplan's rational irrationality theory of government failure. I agree that nothing is often a good thing for governments to do, and in many cases that is the advice that Cabinet receives.
1mattnewport14y
Politicians' logic: “Something must be done. This is something. Therefore we must do it.”
-23[anonymous]14y

Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.

I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvs.T, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.

Suppose I have ten people and a stick. The appropriate infinite... (read more)

DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.

The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.

5[anonymous]14y
I don't think maximising the minima is what you want. Suppose your choice is to hit one person 20 times, or five people 19 times each. Unless your intuition is different from mine, you'll prefer the first option.
4Nick_Tarleton14y
I don't think you can justifiably expect to be able to tell your brain something this self-evidently unrealistic, and have it update its intuitions accordingly.
4Blueberry14y
Oh, and I'd love to hear what you mean about this.
3Blueberry14y
There's one difference, which is that the inequality of the distribution is much more apparent in your example, because one of the options distributes the pain perfectly evenly. If you value equality of distribution as worth more than one unit of pain, it makes sense to choose the equal distribution of pain. This is similar to economic discussions about policies that lead to greater wealth, but greater economic inequality.
2RomanDavis14y
I think the point of Dust Specks Vs Torture was scope failure. Even allowing for some sort of "negative marginal utility" once you hit a wacky number 3^^^3, it doesn't matter. .000001 negative utility point multiplied by 3^^^3 is worse than anything, because 3^^^3 is wacky huge. For the stick example, I'd say it would have to depend on a lot of factors about human psychology and such, but I think I'd hit the one. Marginal utility tends to go down for a product, and I think that the shock of repeated blows would be less than the shock of the one against ten separate people. I think your opinion basically is an appeal to egalitarianism, since you expect negative utility to yourself from an unfair world where one person gets something that ten other people did not, for no good or fair reason.
1NancyLebovitz14y
I think you're mistaken about the marginal utility-- being hit again after you've already been injured (especially if you're hit on the same spot) is probably going to be worse than the first blow. Marginal disutility could plausibly work in the opposite direction from marginal utility. Each 10% of your money that you lose impacts your quality of life more. Each 10% of money that you gain impacts your quality of life less. There might be threshold effects for both, but I think the direction is right.
1RomanDavis14y
I was thinking more along the lines of scope failure: If some one said you were going to be hit 11 times would you really expect it to feel exactly 110% as bad as being hit ten times? But yes, from a traditional economics point of view, your post makes a hell of a lot more sense. Upvoted.
1Blueberry14y
Part of the assumption of the problem was that hitting with a stick has some constant negative utility for all the people.
0[anonymous]14y
It's always hard to think about this sort of thing. I read that in the original problem, but then I ended up thinking about actual hitting people with sticks when deciding what was best. Is there anything in the archives like The True Prisoner's Dilemma but for giving an intuitive version of problems with adding utility?
0RomanDavis14y
Then it depends. If you're a utilitarian, it is still better to hit the guy nine times than to hit ten people once each. If you allow some ideas about the utility of equality, then things get more complicated. That's why I think most people reject the simple math that 9 < 10.
1snarles14y
I'd analyze your question this way. Ask any one of the ten people which they would prefer: A) to get hit B) to have a 1/10th chance of getting hit 9 times. Assuming rationality and constant disutility of getting hit, every one of them would choose B.
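Explicitly, if each hit costs one unit of utility (an assumed normalization), the expected disutilities are

\[ E[A] = 1, \qquad E[B] = \tfrac{1}{10} \times 9 = 0.9, \]

so each individual prefers B, the option with the lower expected disutility.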

I have a theory: Super-smart people don't exist, it's all due to selection bias.

It's easy to think someone is extremely smart if you've only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they've written there.

So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people, who think and publish their thought a lot. Because the best ... (read more)

[-][anonymous]14y170

I was thinking something similar just today:

Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.

In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.

You may be right about math. Reading the Polymath research threads (like this one) made me aware that even Terry Tao thinks in small and well-understood steps that are just slightly better informed than those of the average mathematician.

3NancyLebovitz14y
I Am a Strange Loop by Hofstadter may be of interest-- it's got a lot about how he thinks as well as his conclusions.
7snarles14y
I'm not a psychologist but I thought I could improve on the vagueness of the original discussion. There are a few factors which determine "smartness" (or potential for success):

1. Speed. Having faster hardware.
2. Pattern Recognition. Being better at "chunking".
3. Memory.
4. Creativity. (="divergent" thinking.)
5. Detail-awareness.
6. Experience. Having incorporated many routines into the subconscious thanks to extensive practice.
7. Knowledge. (Quality is more important than quantity.)

The first five traits might be considered part of someone's "talent." Experience and knowledge, which I'll group together as "training", must be gained through hard work. Potential for success is determined by a geometric (rather than additive) combination of talent and training, that is, roughly:

potential for success = talent * training

All this math, of course, is not remotely intended to be taken at face value; it's merely the most efficient way to make my point. The "super-smart" start life with more talent than average. The rule of the bell curve holds, so they generally do not have an overwhelming cognitive advantage over the average person. But they have enough talent to justify investing much more of their resources into training. This is because a person with 15 talent will gain 15 success for every unit of time they put into training, while a unit of training is worth 17 success for a person with 17 talent. The less time you have to spend, the more time costs, so all other things being equal, the person with more talent will put more time into training.

Suppose the person with 15 talent puts 100 units of time into training, and the person with 17 talent puts 110 units of time into training. Then:

15 talent * 100 training => 1500 success
17 talent * 110 training => 1870 success

That's about 25% more success for only 13% more talent. There's probably some more formal work done along these lines, but I'm not an economist either.
6NancyLebovitz14y
If you're interpreting "super-smart" to mean always right, or at least reasonable, and thus never severely wrong-headed, I think you're correct that no one like that exists, but it seems like a rather comic-bookish idea of super-smartness. Also, I have no idea how good your judgment is about whether what you call brain-hurtful is actually an idea I'd think was egregiously wrong. I think there are a lot of folks smart enough to be special people -- those who come up with worthwhile insights frequently. And even if it's just a matter of generating lots of ideas and then publishing the best, recognizing the best is a worthwhile skill. It's conceivable that idea-generation and idea-recognizing are done by two people who together give the impression of one person who's smarter than either of them.
2dyokomizo14y
How would you describe the writing patterns of super-smart people? Similarly, what would meeting/talking/debating with them feel like?
4taw14y
I think my comment was rather vague, and people aren't sure what I meant. This is all my impression; as far as I can tell the evidence for all of it is rather underwhelming. I'm writing this more to explain my thought than to "prove" anything.

It seems to me that people come in different levels of smartness. There are some people with all sorts of problems that make them incapable of even the human normal, but let's ignore them entirely here. Then, there are normal people who are pretty much incapable of original highly insightful thought, critical thinking, rationality etc. They can usually do OK in normal life, and can even be quite capable in their narrow area of expertise, and that's about it. They often make the most basic logic mistakes etc.

Then there are "smart" people who are capable of original insight, and don't get too stupid too often. They're not measuring exactly the same thing, but IQ tests are capable of distinguishing between those and the normal people reasonably well. With smart people both their top performance and their average performance is a lot better than with average people. In spite of that, all of them very often fail basic rationality in some particular domains they feel too strongly about.

Now I'm conflicted about whether people who are as much above "smart" as "smart" is above normal really exist. A canonical example of such a person would be Feynman - from my limited information he seems to be just so ridiculously smart. Eliezer seems to believe Einstein is like that, but I have even less information about him. You can probably think of a few such other people.

Unfortunately there's a second observation - there's no reason to believe such people existed only in the past, or would have an aversion to blogging - so if super-smart people exist, it's fairly certain that some blogs of such people exist. And if such blogs existed, I would expect to have found a few by now. And yet, every time it seemed to me that someone might just be that smart and I start
6cousin_it14y
A few people who blog frequently and fit my criteria for "super-smart": Terence Tao, Cosma Shalizi, John Baez.
3Risto_Saarelma14y
I was thinking of Tao as well. Also, Oleg Kiselyov for programming/computer science.
0cousin_it14y
Yep, seconding the recommendation of Oleg. I read a lot of his writings and I'd definitely have included him on the list.
0cupholder14y
Interesting picks. I hadn't thought of Cosma Shalizi as 'super-smart' before, just erudite and with a better memory for the books and papers he's read than me. Will have to think about that...
5CronoDAS14y
I think you're giving the "normal person" too little credit.
4NancyLebovitz14y
Agreed. If nothing else, refugee situations aren't that uncommon in human history, and the majority are able to migrate and adapt if they're physically permitted to do so.
4dyokomizo14y
It doesn't seem to me that you have an accurate description of what a super-smart person would do or say, other than matching your beliefs and providing insightful thought. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
1taw14y
I don't know what the correct super-smartness cluster is, so I cannot make an objective predictive definition, at least yet. There's no need to suffer from physics envy here - a lot of useful knowledge has this kind of vagueness. Nobody has managed to define "pornography" yet, and it's a far easier concept than "super-smartness". This kind of speculation might end up with something useful with some luck (or not).

Even defining by example would be difficult. My canonical examples would be Feynman and Einstein - they seem far smarter than the "normally smart" people. Let's say I collected a sufficiently large sample of "people who seem super-smart", got as accurate information about them as possible, and did a proper comparison between them and a background of normally smart people (it's pretty easy to get good data on those, even by generic proxies like education - so I'm least worried about that) in a way that would be robust against even a large number of data errors. That's about the best I can think of.

Unfortunately it will be of no use, as my sample will be not random super-smart people but those super-smart people who are also sufficiently famous for me to know about them and be aware of their super-smartness. This isn't what I want to measure at all. And I cannot think of any reasonable way to separate these. So the project is most likely doomed. It was interesting to think about this anyway.
3Mitchell_Porter14y
Why would they blog? They would already know that most people have nothing of interest to tell them; and if they want to tell other people something, they can do it through other channels. If such a person had a blog, it might be for a very narrow reason, and they would simply refrain from talking about matters guaranteed to produce nothing but time-consuming stupidity in response.
3JoshuaZ14y
I'm not sure that the ability to have original thoughts is at all closely connected to the ability to think rationally. What makes you reach that conclusion? Have you tried looking at Terence Tao's blog? I think he fits your model, but it may be that many of his posts will be too technical for a non-mathematician. I'm not sure in general if blogging is a good medium for actually finding this sort of thing. It is easy to see if a blogger isn't very smart; it isn't clear to me that it is a medium that allows one to easily tell if someone is very smart.
2xamdam14y
I doubt your disproof of super-smart people, for the very same reasons you do, perhaps with a greater weight assigned to those reasons. I am also not sure about your definition of super-smart. Is an idiot savant (in math, say) super-smart? If you mean super-smart = consistently rational, I suspect nothing prevents people of normal-smart IQ from scoring (super) well there, trading off quantity of ideas for quality. There is a ceiling there, as good ideas get more complex and require more processing power, but given how crazy this world is I suspect Norm Smart the Rationalist can score surprisingly highly on a relative basis. As a data point you might want to look at the "Monster Minds" chapter of Feynman's "Surely You're Joking", since you mentioned Feynman. The chapter is about Einstein. Finally, where is your blog? ;)
1taw14y
My blog is here.
3Vladimir_Nesov14y
You can set that in "preferences".
1cupholder14y
Reminds me of 'My Childhood Role Model'. As for the actual meat of your comment, I don't have much to add. 'Smart' is a slippery enough word that I'd guess one's belief in 'super-smart people' depends on how one defines 'smart.'
0DanielVarga14y
There is an important systematic bias you only tangentially mention in your analysis. Super-smart people (more generally, very successful people) don't feel they have to prove themselves all the time. (Especially if they are tenured. :) ) Many of them like to talk before they think. There are very smart people around them who quickly spot the obvious mistakes and laboriously complete the half-baked ideas. It is just more economic this way.
0Jack14y
Have you never had an in-person conversation with a super-smart person? Also, hi folks, I'm back. It is surprisingly difficult to dive back into LW after leaving it for a few weeks.
0taw14y
Obviously no, as I don't believe in their existence.
2Jack14y
My point is that I have trouble telling the difference between a fairly-smart and super-smart person by their writing for exactly the reason you mentioned. But in-person conversations give you access to the raw material and, if I take myself to be fairly smart there are definitely super-smart people out there. For example, I imagine if you had got to talking to Richard Feynman while he was alive you would have quickly realized he was a super-smart person.
5JoshuaZ14y
I'm not sure about this. I have a lot of trouble distinguishing between just smart, super-smart, and smart-and-an-expert-in-their-field. Distinguishing them seems to not occur easily simply based on quick interactions. I can distinguish people in my own field to some extent, but if it isn't my own area, it is much more difficult. Worse, there are serious cognitive biases about intelligence estimations. People are more likely to think of someone as smart if they share interests and also more likely to think of someone as smart if they agree on issues. (Actually I don't have a citation for this one and a quick Google search doesn't turn it up, does someone else maybe have a citation for this?) One could imagine that many people might if meeting a near copy of themselves conclude that the copy was a genius. That said, I'm pretty sure that there are at least a few people out there who reasonably do qualify as super-smart. But to some extent, that's based more on their myriad accomplishments than any personal interaction.
1taw14y
I'd guess it's far, far easier to fool someone in person, with all the noise of primate social cues, so such information is worth a lot less than writing.

The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.

This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.

There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky, career-wise, for scientists to take on projects which last that long), and some int... (read more)

7ocr-fork14y
I winced.
2Daniel_Burfoot14y
I would like to see a top-level link post and discussion of this article (and maybe other related papers).
2cupholder14y
I'm slightly tempted to, because that article is sloppy and unfocused enough that it annoys me, even though it's broadly accurate. (I mean, 'the standard statistical system for drawing conclusions is, in essence, illogical'? Really?) But I don't know what I'd have to add to it, really, other than basically whining 'it is so unfair!'
0Seth_Goldin14y
Yeah, that would be great, but I can't do it; I don't have the technical background, so I hereby delegate the task to someone else willing to write it up.

I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.

My question is this: why are the Born Probabilites a problem for MWI?

I'm sure it's a very difficult problem, I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born Probabilities mainly clicks here:

I

... (read more)
[-][anonymous]14y110

So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going right . . . you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.

So why is this surprising?

The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
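In symbols, with the amplitudes quoted above (a sanity check of the Born rule, not an explanation of it):

\[ P(\mathrm{LEFT}) = |0.548|^2 \approx 0.30, \qquad P(\mathrm{RIGHT}) = |0.837|^2 \approx 0.70, \qquad 0.548^2 + 0.837^2 \approx 1. \]

The rule fixes how amplitudes map to observed frequencies; the open question is why squaring, rather than some other map, is what reality uses.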

3Spurlock14y
Ah. So it's not the idea that it's weighted so much as the specific act of squaring the amplitude: "Why the square of the amplitude, why not something else?" I suppose, the way I had been reading, I thought that the problem came from expecting a different result given the squared-amplitude probability rule, not from the rule itself. That is helpful, many thanks.
6Douglas_Knight14y
That's one issue, but as Warrigal said, the other issue is "how this even makes sense." It seems to say that the amplitude is a measure of how real the configuration is.
0[anonymous]14y
Yes, precisely.
1NancyLebovitz14y
Delightful, and has a nice breakdown of the sort of questions to ask yourself (what exactly is the problem, how much precision is actually needed, what is the condition of the tools, etc.) if you want to get things done efficiently.

After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.

It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)

5simplicio14y
It is problematic but necessary, in my opinion. Politics IS the mind-killer, but politics DOES matter. Avoiding the topic would seem to be an admission that this rationality thing is really just a pretty toy. But it would be nice to lay down some ground-rules.
2mattnewport14y
I don't think anyone has mentioned a political party or a specific current policy debate yet. That's when things really go downhill.
4khafra14y
I think a current policy debate has potential for better results, since it would offer the potential for betting, and avoid some of the self-identification and loyalty that's hard to avoid when applying a model as simple as a political philosophy to something as complex as human culture.
1fburnaby14y
Since we've had some discussion about additions/modifications to the site, and LW -- as I understand it -- was originally a sort of spin-off from OB, maybe the addition of a karma-based prediction market of some sort would be suitable (and very interesting).
1JoshuaZ14y
Maybe make bets of karma? That might be very interesting. It would have less bite than monetary stakes, but highly risk averse individuals might be more willing to join the system.
2fburnaby14y
I think having such a low-stakes game to play would be beneficial not only to highly risk-averse individuals, but to anyone. It would provide a useful training ground (maybe even a competitive ladder in a rationality dojo) for anyone who wants to also play with higher stakes elsewhere. Edit: I'm currently a mediocre programmer (and intend to become good via some practice). And while I don't participate often in the community (yet), this could be fun and educational enough that I would be willing to contribute a fairly substantial amount of labour to it. If anyone with marginally more know-how is willing to implement such an idea, let me know and I'll join up.
1Matt_Duing14y
My feelings on this are mixed. I've found LW to be a refreshing refuge from such quarrels. On the other hand, without careful thought political debates reliably descend into madness quickly, and it is not as if politics is unimportant. Perhaps taking the mental techniques discussed here to other forums could improve the generally atrocious level of reasoning usually found in online political discussions, though I expect the effect would be small.

Are there any rationalist psychologists?

Also, more specifically but less generally relevant to LW; as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?

1Kevin14y
As a start, http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy is a branch of psychotherapy with some respect around here because of the evidence that it sometimes works, compared to the other fields of psychotherapy with no evidence.
1RomanDavis14y
Do they really have such a poor track record? I know some scientists have very little respect for the "soft" sciences, but sociologists can at least make generalizations from studies done on large scales. Psychotherapy makes a lot of people incredulous, but is it really fair to say that most methods in practice today are ~0% effective? Yes, this is essentially a post stating my incredulity. Would you mind quelling it?
2pjeby14y
It's not that they're 0% effective, it's that they're not much more effective than placebo therapy (i.e. being put on a waiting list for therapy), or keeping a journal. CBT is somewhat more effective, but I've also heard that it's not as effective for high-ruminators... i.e., people who already obsess about their thinking.
3AlanCrowe14y
Scientific medicine is difficult and expensive. I worry that the apparent success of CBT may be because methodological compromises needed to make the research practical happen to flatter CBT more than they flatter other approaches. I might be worrying about the wrong thing. Do we know anything about the usefulness of Prozac in treating depression? Since we turn a blind eye to the unblinding of all our studies by the sexual side-effects of Prozac, and also refuse to consider the direct impact of those side-effects it could be argued that we don't actually have any scientific knowledge of the effectiveness of the drug.
0Douglas_Knight14y
The claim I've seen associated with Robyn Dawes is that therapy is useful (which I read as "more useful than being on a waiting list"), but that untrained therapists are just as good as those trained under most methods. (ETA: and, contrary to Kevin, they have been tested and found wanting)
1Kevin14y
It's not that other forms of psychotherapy are scientifically shown to be 0% effective; it's just that evidence-based psychotherapy is a surprisingly recent field. Psychotherapy can still work even if some fields of it have not had rigorous studies showing their effectiveness... but you might as well go with a therapist that has training in a field of psychotherapy that has some scientific method behind it. http://www.mentalhelp.net/poc/view_doc.php?type=doc&id=13023&cn=5
1torekp14y
I can't help you with the Denver area in particular, but the general answer is a definite yes. In an interesting juxtaposition, American Psychologist magazine had a recent issue prominently featuring discussion of how to get past the misuse of statistics discussed in this very LW open thread. And it's not the first time the magazine addressed the point.
1NancyLebovitz14y
Does cognitive rationalist therapy count as both rationalist and psychology for purposes of this question? I think Learning Methods is a more sophisticated rationalist approach than CBT (it does a more meticulous job of identifying underlying thoughts), and might be worth checking into.
2pjeby14y
Interesting. I found the site to be not very helpful, until I hit this page, which strongly suggests that at least one thing people are learning from this training is the practical application of the Mind Projection Fallacy. The quote is from an article written by an LM student, with some insights from the learning process that helped her overcome her stage fright. IOW, at least one aspect of LM sounds a bit like a "rationality dojo" to me, in the sense that here's an ordinary person with no special interest in rationalism, giving a beautiful (and more detailed than I quoted here) explanation of the Mind Projection Fallacy, based on her practical applications of it in everyday life.

(Bias disclaimer: I might be positively inclined to what I'm reading because some of it resembles or is readily translatable to aspects of my own models. Another article that I'm in the middle of reading, for example, talks about the importance of addressing the origins of nonconsciously-triggered mental and physical reactions, vs. consciously overriding symptoms -- another approach I personally favor.)

The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.

Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.

What would happen in such a world?

...

We already have these buttons on LessWrong... ;)

3cousin_it14y
Karma does make me feel important, but when it comes to happiness karma can't hold a candle to loud music, alcohol and girls (preferably in combination). I wish more people recognized these for the eternal universal values they are. If only someone invented a button to send me some loud music, alcohol and girls, that would be the ultimate startup ever.
5Vladimir_Nesov14y
Classical game theorists establish a scientific consensus that the only rational course of action is not to push the buttons. Anyone who does is regarded with contempt or pity and gets lowered in the social stratum, before finally managing to rationalize the idea out of conscious attention, with the help of the instinct to conformity. A few free-riders smugly teach the remaining naive pushers a bitter lesson, only to stop receiving the benefit. Everyone gets back to business as usual, crazy people spinning the wheels of a mad world.
7Wei Dai14y
Are you saying that classical game theorists would model the button-pushing game as one-shot PD? Why would they fail to notice the repetitive nature of the game?
2khafra14y
I'd be far more willing to believe in game theorists calling for defection on the iterated PD than in mathematicians steering mainstream culture. However, with the positive-sum nature of this game, I'd expect theorists to go with Schelling instead of Nash; and then be completely disregarded by the general public who categorize it under "physical ways of causing pleasure" and put sexual taboos on it.
1Vladimir_Nesov14y
The theory says to defect in the iterated dilemma as well (under some assumptions).
3cousin_it14y
Here's what the theory actually says: if you know the number of iterations exactly, it's a Nash equilibrium for both to defect on all iterations. But if you know the chance that this iteration will be the last, and this chance isn't too high (e.g. below 1/3, can't be bothered to give an exact value right now), it's a Nash equilibrium for both to cooperate as long as the opponent has cooperated on previous iterations.
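For concreteness, here is the standard grim-trigger calculation behind that claim; the payoffs T = 5, R = 3, P = 1, S = 0 are an assumed textbook example, which is why the exact cutoff is left vague above. With probability \(\delta\) that the game continues after each round, cooperating forever must beat defecting once and being punished with mutual defection thereafter:

\[ \frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta} \quad\Longleftrightarrow\quad \delta \;\ge\; \frac{T-R}{T-P}. \]

For T = 5, R = 3, P = 1 this gives \(\delta \ge 1/2\), i.e. a per-round chance of the game ending of at most 1/2; other payoff matrices give other cutoffs, such as the 1/3 mentioned above.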
0AlephNeil14y
This comment was very entertaining... but... I actually do think people in such a world ought not to press buttons. But not very strongly... only about the same "oughtnotness" as people ought not to waste time looking at porn. The argument is the same: Aren't there better things we could be doing? Ideally, in button-world, people will devise a way to remove their buttons. But if that couldn't be done, and we're seriously asking "what would happen?" I suppose it might end up being treated like sex. Having one's button publicly visible is "indecent" - buttons are only pushed in private. Etc. etc.
4Blueberry14y
The analogy to sex is rough. From a historical and evolutionary perspective, sex is treated the way it is because it leads to gene replication and parenthood, not because it leads to pleasure. The lack of side effects from the buttons makes them more comparable to rubbing someone's back, smiling, or saying something nice to someone.
3AlephNeil14y
OK - well that's one possibility. But in discussing either of these analogies, aren't we just showing (a) that the pleasure-button scenario is underdetermined, because there are many different kinds of pleasure and (b) that it's redundant, because people can actually give each other pats on the back, or hand-jobs or whatever.
4Mass_Driver14y
I dunno, this strikes me as a somewhat sex-negative attitude. Responding seriously to your question about the better things we could be doing, it strikes me that people spend most of their time doing worthless things. We seldom really know whether we are happy, what it means to be happy, or how what we are doing might connect to somebody's future happiness. If the buttons actually made people happy from time to time, they could be quite useful as a 'reality check.' People suspecting that X led to happiness could test and falsify their claim by seeing whether X produced the same mental/emotional state that the button did. Obviously we shouldn't spend all our time pressing buttons, having sex, or looking at porn. But I sometimes wonder whether we wouldn't be better off if most people, especially in the developed world where labor seems to be over-supplied and the opportunity cost of not working is low, spent a couple hours a day doing things like that.
3AlephNeil14y
Isn't that a bit like snorting some coke (or perhaps just masturbating) after a happy experience (say, proving a particularly interesting theorem) to test whether it was really 'happy'? There are many different kinds of 'happiness', and what makes an experience a happy or an unhappy one is not at all simple to pin down. A kind of happiness that one can obtain at will, as often as desired, and which is unrelated to any "objective improvement" in oneself or the things one cares about, isn't really happiness at all. Pretend it's new year's eve and you're planning some goals for next year - some things that, if you achieve them, you will look back with pride and a sense of accomplishment. Is 'looking at lots of porn' on your list (even assuming that it's free and no-one was harmed in producing it)? I don't mean to imply anything about sex, because sex has a whole lot of things associated with it that make it extremely complicated. But the 'pleasure button' scenario gives us a clean slate to work from, and to me it seems an obvious reductio ad absurdum of the idea that pleasure = utility.
2Blueberry14y
You seem to be confusing happiness with accomplishment: Sure it is. It may not be accomplishment, or meaningfulness, but it is happiness, by definition. I think the confusion comes because you seem to value many other things more than happiness, such as pride and accomplishment. Happiness is just a feeling; it's not defined as something that you need to value most, or gain the most utility from.
0AlephNeil14y
How do you distinguish a degenerate case of 'happiness' from 'satiation of a need'? Is the smoker or heroin addict made 'happy' by their fix? Does a glass of water make you 'happy' if you're dying from thirst, or does it just satiate the thirst? And can't the same sensation be either 'happy' or 'unhappy' depending on the circumstances? A person with persistent sexual arousal syndrome isn't made 'happy' by the orgasms they can't help but 'endure'. The idea that there's a "raw happiness feeling" detachable from the information content that goes with it is intuitively appealing but fatally flawed.
1Blueberry14y
Yes, this is true. We will need to assume that the button can analyze the context to determine how to provide happiness for the particular brain it's attached to. My point is that happiness is not necessarily associated with accomplishment or objective improvement in oneself (though it can be). In such a situation, some people might not value this kind of detached happiness, but that doesn't mean it's not happiness.
0RomanDavis14y
Depends on how you define happiness. If you define it as "how much dopamine is in my system", "joy", or "these are the neat brainwaves my brain is giving off", then yes, you could achieve happiness by pressing a button (in theory). A lot of people seem to assume happiness = utility measured in utilons, which is a whole different thing altogether. Sort of like seeing someone writhe in ecstasy after jamming a needle in their arm and saying, "I'm so happy I'm not a heroin addict."
1SilasBarta14y
Oh, really? How can I get a cheap, legal, repeatable dopamine rush to my brain?
2RomanDavis14y
Edited my post to reflect your point. Although, I'm a young male and can achieve orgasm multiple times in under ten minutes with the aid of some lube and free porn. You probably didn't want to know that.
0Blueberry14y
That's amazing. A drug that could eliminate refractory period like that would sell better than Viagra.
1cousin_it14y
It seems the pharma industry discovered the effect of PDE5 inhibitors on erectile dysfunction pretty much by accident. The stuff was initially developed to treat heart disease, initial tests showed it didn't work, but male test subjects reported a useful side effect. Reminds me of the story of post-it notes: the guy who developed them actually wanted to create the ultimate glue, but sadly the result of his best efforts didn't stick very well, so he just went ahead and commercialized what he had. If big pharma is listening, I'd like to post a request for exercise pills.
-1RomanDavis14y
Actually, orgasms are usually much less intense and don't result in ejaculation if I achieve them in under a certain amount of time. I find the best are in the 20-30 minute range.
0Blueberry14y
Yes, I've noticed that assumption, and I think even Jeremy Bentham talked about pleasure in utility terms. I don't think it's accurate for everyone, for instance, someone who values accomplishment more than happiness will assign higher utility to choices that lead to unhappy accomplishment than to unproductive leisure.
-1RomanDavis14y
...and then they're happier working. By definition. Welcome to semantics.
0Blueberry14y
That's a strange definition of "happier". They're happier with a choice just because they prefer that choice? Even if they appear frustrated and tired and grumpy all the time? Even if they tell you they're not happy and they prefer this unhappiness to not accomplishing anything? (In real life, I suspect happy people actually accomplish more, but consider a hypothetical where you have to choose between unhappy accomplishment and unproductive leisure.)
-1RomanDavis14y
Eliezer did this whole thing in the Fun Theory sequence. Yes, not doing anything would be very boring, and being filled with cool drugs sounds like a horror story to my current utility curve. Let's hope the future isn't some form of ironic hell.
0Mass_Driver14y
AlephNeil, I was taking Scott Adams' assertion that the button produces "happiness" at face value. I was being rather literal, I'm afraid. I think you're right to worry that no actual mechanism we can imagine in the near future would act like Scott's button. I stand by my point, though, that if we really did have a literal happiness button, it would probably be a good thing. As perhaps a somewhat more neutral example, I like to splash around in a swimming pool. It's fun. I hope to do that a lot over the next year or so. If I successfully play in the pool a lot during time that otherwise might have been spent reading marginally interesting articles, staring into space, harassing roommates, or working overtime on projects I don't care about, I will consider it a minor accomplishment. More to the point, if regular bouts of aquatic playtime keep me well-adjusted and accurately tuned-in to what it means to be happy, then I will rationally expect to accomplish all kinds of other things that make me and others happy. I will consider this to be a moderate accomplishment. There is a difference between pleasure and utility, but I don't think it's ridiculous at all to have a pleasure term in one's utility function. A more pleasant life, all else being equal, is a better one. There may be diminishing returns involved, but, well, that's why we shouldn't literally spend all day pressing the button.
0NancyLebovitz14y
That depends on how people react. It's at least plausible that people need some amount of pleasure in order to be able to focus on their other goals.
0Houshalter14y
How does that work? I suppose it makes a little sense, considering that the world has to go on and can't stop because everyone's on the ground being "happy", but that wouldn't mean that people wouldn't do it, or even that it wouldn't be the "rational" thing to do.

Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?

Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.

4CronoDAS14y
But you can touch that button yourself...
5SilasBarta14y
How does that compare to when someone else touches your button with their button?
5CronoDAS14y
I've never done that, so I don't know.
3Richard_Kennaway14y
I see that subtext, but I also see a subtext of geeks blaming the obvious irrationality of everyone else for them not getting any, like, it's just poking a button, right?
3Blueberry14y
Except that sex, unlike the button in the story, doesn't always make people happy. Sometimes, for some people, it comes with complications that decrease net utility. (Also, it is possible to push your own button with sex.)
4mattnewport14y
Sure, but it's not my comparison - I'm just saying it appears to be the obvious subtext of the original article.
1Houshalter14y
But two poor, "lonely" people could just get together and push each other's buttons. That's the problem with this: any two people who can cooperate with each other can get the advantage. There was once an experiment to evolve, with a genetic algorithm, different programs that could play the prisoner's dilemma. I'm not sure exactly how it was organized, which would really make or break different strategies, but the result was a program which always cooperated except when the other didn't, and it continued refusing to cooperate with the other until it believed they were "even".
1mattnewport14y
Are you thinking of tit for tat? I'm not trying to argue for or against the comparison. Would you agree that the subtext exists in the original article, or do you think I'm over-interpreting?
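For reference, minimal sketches of both strategies, assuming histories are lists of "C"/"D" moves; the names are illustrative, and get_even matches Houshalter's description rather than any specific published program:

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def get_even(my_history, their_history):
    # Withhold cooperation until the opponent's surplus defections
    # have been repaid one-for-one.
    debt = their_history.count("D") - my_history.count("D")
    return "D" if debt > 0 else "C"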
1bentarm14y
No, the subtext is definitely there in the original article. At least, I saw it immediately, as did most of the commenters:
0Houshalter14y
I think the best analogy would be drugs, but those have bad things associated with them that the button example doesn't. They cost money, they cause health problems, etc.
0Vladimir_Nesov14y
That would not model the True Prisoner's Dilemma.
0mattnewport14y
What's that got to do with the price of eggs?
4Alicorn14y
A social custom would be established that buttons are only to be pressed by knocking foreheads together. Offering to press a button in a fashion that doesn't ensure mutuality is seen as a pathetic display of low status.

Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.

3cousin_it14y
Hah. Status is relative, remember? Your setup just ensures that "dodging" at the last moment, getting your button pressed without pressing theirs, is seen as a glorious display of high status.

William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:

Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A

... (read more)
0billswift14y
Interesting. I have read several of Loftus's books, but the last one was The Myth of Repressed Memory: False Memories and Allegations of Sexual Abuse over ten years ago. I think I'll go see what she has written since. Thanks for reminding me of her work.

This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.

3Vladimir_Nesov14y
The caveat is of course that Counterfactual Mugging or Newcomb Problem are not to be analyzed as situations you encounter in real life: the artificial elements that get introduced are specified explicitly, not by an update from surprising observation. For example, the condition that Omega is trustworthy can't be credibly expected to be observed. The thought experiments explicitly describe the environment you play your part in, and your knowledge about it, the state of things that is much harder to achieve through a sequence of real-life observations, by updating your current knowledge.
0cousin_it14y
I dunno, Newcomb's Problem is often presented as a situation you'd encounter in real life. You're supposed to believe Omega because it played the same game with many other people and didn't make mistakes. In any case I want a decision theory that works on real life scenarios. For example, CDT doesn't get confused by such explosions of counterfactuals, it works perfectly fine "locally". ETA: My argument shows that modifying yourself to never "regret your rationality" (as Eliezer puts it) is impossible, and modifying yourself to "regret your rationality" less rather than more requires elicitation of your prior with humanly impossible accuracy (as you put it). I think this is a big deal, and now we need way more convincing problems that would motivate research into new decision theories.
0Vladimir_Nesov14y
If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment. But the absence of relevant No-megas is part of the setting, so it too should be a conclusion one draws from those observations.
0cousin_it14y
Yes, but you must make the precommitment to love Omegas and hate No-megas (or vice versa) before you receive those observations, because that precommitment of yours is exactly what they're judging. (I think you see that point already, and we're probably arguing about some minor misunderstanding of mine.)
0Vladimir_Nesov14y
You never have to decide in advance, to precommit. Precommitment is useful as a signal to those that can't follow your full thought process, and so you replace it with a simple rule from some point on ("you've already decided"). For Omegas and No-megas, you don't have to precommit, because they can follow any thought process.
0cousin_it14y
I thought about it some more and I think you're either confused somewhere, or misrepresenting your own opinions. To clear things up let's convert the whole problem statement into observational evidence. Scenario 1: Omega appears and gives you convincing proof that Upsilon doesn't exist (and that Omega is trustworthy, etc.), then presents you with CM. Scenario 2: Upsilon appears and gives you convincing proof that Omega doesn't exist, then presents you with anti-CM, taking into account your counterfactual action if you'd seen scenario 1. You wrote: "If you do present observations that move the beliefs to represent the thought experiment, it'll work just as well as the magically contrived thought experiment." Now, I'm not sure what this sentence was supposed to mean, but it seems to imply that you would give up $100 in scenario 1 if faced with it in real life, because receiving the observations would make it "work just as well as the thought experiment". This means you lose in scenario 2. No?
0Vladimir_Nesov14y
Omega would need to convince you that Upsilon not just doesn't exist, but couldn't exist, and that's inconsistent with scenario 2. Otherwise, you haven't moved your beliefs to represent the thought experiment. Upsilon must be actually impossible (less probable) in order for it to be possible for Omega to correctly convince you (without deception). Being updateless, your decision algorithm is only interested in observations so far as they resolve logical uncertainty and say which situations you actually control (again, a sort of logical uncertainty), but observations can't refute the logically possible, so they can't make Upsilon impossible if it wasn't already impossible.
0cousin_it14y
No it's not inconsistent. Counterfactual worlds don't have to be identical to the real world. You might as well say that Omega couldn't have simulated you in the counterfactual world where the coin came up heads, because that world is inconsistent with the real world. Do you believe that?
0Vladimir_Nesov14y
By "Upsilon couldn't exist", I mean that Upsilon doesn't live in any of the possible worlds (or only in insignificantly few of them), not that it couldn't appear in the possible world where you are speaking with Omega. The convention is that the possible worlds don't logically contradict each other, so two different outcomes of coin tosses exist in two slightly different worlds, both of which you care about (this situation is not logically inconsistent). If Upsilon lives on such a different possible world, and not on the world with Omega, it doesn't make Upsilon impossible, and so you care what it does. In order to replicate Counterfactual Mugging, you need the possible worlds with Upsilons to be irrelevant, and it doesn't matter that Upsilons are not in the same world as the Omega you are talking to. (How to correctly perform counterfactual reasoning on conditions that are logically inconsistent (such as the possible actions you could make that are not your actual action), or rather how to mathematically understand that reasoning is the septillion dollar question.)
2cousin_it14y
Ah, I see. You're saying Omega must prove to you that your prior made Upsilon less likely than Omega all along. (By the way, this is an interesting way to look at modal logic, I wonder if it's published anywhere.) This is a very tall order for Omega, but it does make the two scenarios logically inconsistent. Unless they involve "deception" - e.g. Omega tweaking the mind of counterfactual-you to believe a false proof. I wonder if the problem still makes sense if this is allowed.
0[anonymous]14y
Sorry, can't parse that, you'd need to unpack more.
3Nisan14y
Whatever our prior for encountering No-mega, it should be counterbalanced by our prior for encountering Yes-mega (who rewards you if you are counterfactually-muggable).
0cousin_it14y
You haven't considered the full extent of the damage. What is your prior over all crazy mind-reading agents that can reward or punish you for arbitrary counterfactual scenarios? How can you be so sure that it will balance in favor of Omega in the end?
1Nisan14y
In fact, I can consider all crazy mind-reading reward/punishment agents at once: For every such hypothetical agent, there is its hypothetical dual, with the opposite behavior with respect to my status as being counterfactually-muggable (the one rewarding what the other punishes, and vice versa). Every such agent is the dual of its own dual; in the universal prior, being approached by an agent is about as likely as being approached by its dual; and I don't think I have any evidence that one agent will be more likely to appear than its dual. Thus, my total expected payoff from these agents is 0. Omega itself does not belong to this class of agent; it has no dual. (ETA: It has a dual, but the dual is a deceptive Omega, which is much less probable than Omega. See below.) So Omega is the only one I should worry about. I should add that I feel a little uneasy because I can't prove that these infinitesimal priors don't dominate everything when the symmetry is broken, especially when the stakes are high.
4cousin_it14y
Why? Can't your definition of dual be applied to Omega? I admit I don't completely understand the argument.
3Nisan14y
Okay, I'll be more explicit: I am considering the class of agents who behave one way if they predict you're muggable and behave another way if they predict you're unmuggable. The dual of an agent behaves exactly the same as the original agent, except the behaviors are reversed. In symbols:

* An agent A has two behaviors.
* If it predicts you'd give Omega $5, it will exhibit behavior X; otherwise, it will exhibit behavior Y.
* The dual agent A* exhibits behavior Y if it predicts you'd give Omega $5, and X otherwise.
* A and A* are equally likely in my prior.

What about Omega?

* Omega has two behaviors.
* If it predicts you'd give Omega $5, it will flip a coin and give you $100 on heads; otherwise, nothing. In either case, it will tell you the rules of the game.

What would Omega* be?

* If Omega* predicts you'd give Omega $5, it will do nothing. Otherwise, it will flip a coin and give you $100 on heads. In either case, it will assure you that it is Omega, not Omega*.

So the dual of Omega is something that looks like Omega but is in fact deceptive. By hypothesis, Omega is trustworthy, so my prior probability of encountering Omega* is negligible compared to meeting Omega. (So yeah, there is a dual of Omega, but it's much less probable than Omega.) Then, when I calculate expected utility, each agent A is balanced by its dual A*, but Omega is not balanced by Omega*.
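A minimal numerical sketch of this symmetry, in Python. The uniform prior over each agent/dual pair and the ±$100 payoffs are illustrative assumptions, not part of the argument above:

```python
# Toy check of the symmetry claim. Assumptions (not from the comment):
# each agent A and its dual A* get prior probability 1/2 within the pair,
# and the two behaviors X and Y pay out +100 and -100 respectively.

PAYOFF = {"X": 100.0, "Y": -100.0}  # illustrative payoffs to you

def agent(muggable: bool) -> str:
    """Agent A: behavior X if it predicts you're muggable, else Y."""
    return "X" if muggable else "Y"

def dual(muggable: bool) -> str:
    """Dual agent A*: the same rule with the behaviors swapped."""
    return "Y" if muggable else "X"

def expected_payoff(muggable: bool) -> float:
    """Expected payoff over the pair {A, A*}, each with probability 1/2."""
    return 0.5 * PAYOFF[agent(muggable)] + 0.5 * PAYOFF[dual(muggable)]

# Whatever your disposition, the pair contributes the same expected payoff,
# so this class of agents gives you no net reason to be muggable or not.
assert expected_payoff(True) == expected_payoff(False) == 0.0
```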
0cousin_it14y
If we assume you can tell "deceptive" agents from "non-deceptive" ones and shift probability weight accordingly, then not every agent is balanced by its dual, because some "deceptive" agents probably have "non-deceptive" duals and vice versa. No? (Apologies if I'm misunderstanding - this stuff is slowly getting too complex for me to grasp.)
1Nisan14y
The reason we shift probability weight away from the deceptive Omega is that, in the original problem, we are told that we believe Omega to be non-deceptive. The reasoning goes like this: If it looks like Omega and talks like Omega, then it might be Omega or Omega*. But if it were Omega*, then it would be deceiving us, so it's most probably Omega. In the original problem, we have no reason to believe that No-mega and friends are non-deceptive. (But if we did, then yes, the dual of a non-deceptive agent would be deceptive, and so have lower prior probability. This would be a different problem, but it would still have a symmetry: We would have to define a different notion of dual, where the dual of an agent has the reversed behavior and also reverses its claims about its own behavior. What would Omega* be in that case? It would not claim to be Omega. It would truthfully tell you that if it predicted you would not give it $5 on tails, then it would flip a coin and give you $100 on heads; and otherwise it would not give you anything. This has no bearing on your decision in the Omega problem.) Edit: Formatting.
0cousin_it14y
By your definitions, Omega* would condition its decision on you being counterfactually muggable by the original Omega, not on you giving money to Omega* itself. Or am I losing the plot again? This notion of "duality" seems to be getting more and more complex.
0Nisan14y
"Duality" has become more complex because we're now talking about a more complex problem — a version of Counterfactual Mugging where you believe that all superintelligent agents are trustworthy. The old version of duality suffices for the ordinary Counterfactual Mugging problem. My thesis is that there's always a symmetry in the space of black swans like No-mega. In the case currently under consideration, I'm assuming Omega's spiel goes something like "I just flipped a coin. If it had been heads, I would have predicted what you would do if I had approached you and given my spiel...." Notice the use of first-person pronouns. Omega* would have almost the same spiel verbatim, also using first-person pronouns, and make no reference to Omega. And, being non-deceptive, it would behave the way it says it does. So it wouldn't condition on your being muggable by Omega. You could object to this by claiming that Omega actually says "I am Omega. If Omega had come up to you and said....", in which case I can come up with a third notion of duality.
1cousin_it14y
If Omega* makes no reference to the original Omega, I don't understand why they have "opposite behavior with respect to my status as being counterfactually-muggable" (by the original Omega), which was your reason for inventing "duality" in the first place. I apologize, but at this point it's unclear to me that you actually have a proof of anything. Maybe we can take this discussion to email?
2Jonathan_Graehl14y
Surely the last thing on anyone's mind, having been persuaded they're in the presence of Omega in real life, is whether or not to give $100 :) I like the No-mega idea (it's similar to a refutation of Pascal's wager by invoking contrary gods), but I wouldn't raise my expectation for the number of No-mega encounters I'll have by very much upon encountering a solitary Omega. Generalizing No-mega to include all sorts of variants that reward stupid or perverse behavior (are there more possible God-likes that reward things strange and alien to us?), I'm not in the least bit concerned. I suppose it's just a good argument not to make plans for your life on the basis of imagined God-like beings. There should be as many gods who, when pleased with your action, intervene in your life in a way you would not consider pleasant, and are pleased at things you'd consider arbitrary, as those who have similar values they'd like us to express, and/or actually reward us copacetically.
2cousin_it14y
You don't have to. Both Omega and No-mega decide based on what your intentions were before seeing any supernatural creatures. If right now you say "I would give money to Omega if I met one" - factoring in all belief adjustments you would make upon seeing it - then you should say the reverse about No-mega, and vice versa. ETA: Listen, I just had a funny idea. Now that we have this nifty weapon of "exploding counterfactuals", why not apply it to Newcomb's Problem too? It's an improbable enough scenario that we can make up a similarly improbable No-mega that would reward you for counterfactual two-boxing. Damn, this technique is too powerful!
0Jonathan_Graehl14y
By not believing No-mega is probable just because I saw an Omega, I mean that I plan on considering such situations as they arise on the basis that only the types of godlike beings I've seen to date (so far, none) exist. I'm inclined to say that I'll decide in the way that makes me happiest, provided I believe that the godlike being is honest and really can know my precommitment. I realize this leaves me vulnerable to the first godlike huckster offering me a decent exclusive deal; I guess this implies that I think I'm much more likely to encounter 1 godlike being than many.

I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:

Charlie Munger on the 24 Standard Causes of Human Misjudgment

http://freebsd.zaks.com/news/msg-1151459306-41182-0/

Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.

Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.

We don't find out until... (read more)

1NancyLebovitz14y
Do you mean young people with unrepayable college debt, or young people with unrepayable debt for degrees which were totally unlikely to be of any use?
0SilasBarta14y
What's the substantive difference? In both cases, the young person has taken out a debt intended to amplify earnings by more than the debt costs, but that isn't going to happen. What does it matter whether the degree was of "any use" or not? What matters is whether it was enough use to cover the debt, not simply whether there exists some gain in earnings due to the debt (which there probably is, though only via signaling, not direct enhancement of human capital).
2NancyLebovitz14y
I was making a distinction between extreme bad judgment (as shown in the article) and moderately bad judgment and/or bad luck. Your emphasis upthread seemed to be on how foolish that woman and her family were.
1Seth_Goldin14y
Arnold Kling has some thoughts about the plight of the unskilled college grad. 1 2
2SilasBarta14y
Thanks for the links, I had missed those. I agree with his broad points, but on many issues, I notice he often perceives a world that I don't seem to live in. For example, he says that people who can simply communicate in clear English and think clearly are in such short supply that he'd hire someone or take them on as a grad student simply for meeting that, while I haven't noticed the demand for my labor (as someone well above and beyond that) being like what that kind of shortage would imply. Second, he seems to have this belief that the consumer credit scoring system can do no wrong. Back when I was unable to get a mortgage at prime rates due to lacking credit history despite being an ideal candidate [1], he claimed that the refusals were completely justified because I must have been irresponsible with credit (despite not having borrowed...), and he has no reason to believe my self-serving story ... even after I offered to send him my credit report and the refusals! [1] I had no other debts, no dependents, no bad incidents on my credit report, stable work history from the largest private employer in the area, and the mortgage would be for less than 2x my income and have less than 1/6 of my gross in monthly payments. Yeah, real subprime borrower there...

One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.

Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined ca... (read more)

1NancyLebovitz14y
As nearly as I can figure it, people who rely on credit ratings mostly want to avoid loss, but aren't very concerned about missing chances to make good loans.

For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".

Someone who avoids carrying debt (i.e., paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.

This is another one of those Hanson-esque "X is not about X-ing" things.
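A toy model of that framing; the default probabilities, interest, and principal figures below are made up purely for illustration:

```python
# Lender's-eye view: score borrowers by expected profit, not by
# "ability to repay". All numbers here are invented for illustration.

def expected_profit(p_default: float, interest_collected: float,
                    principal_at_risk: float) -> float:
    """Lender's expected annual profit from one account, in dollars."""
    return (1 - p_default) * interest_collected - p_default * principal_at_risk

# A perfectly reliable borrower who never carries a balance pays no
# interest, so the lender earns nothing from them:
print(expected_profit(p_default=0.00, interest_collected=0.0,
                      principal_at_risk=0.0))       # 0.0

# A somewhat risky borrower who reliably carries an interest-bearing
# balance is far more valuable despite the default risk:
print(expected_profit(p_default=0.05, interest_collected=600.0,
                      principal_at_risk=3_000.0))   # 420.0
```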

3NancyLebovitz14y
I think there's also some Conservation of Thought (1) involved-- if you have a credit history to be looked at, there are Actual! Records!. If someone is just solvent and reliable and has a good job, then you have to evaluate that. There may also be a weirdness factor if relatively few people have no debt history. (1) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed is partly about how a lot of what looks like tyranny when you're on the receiving end of it is motivated by the people in charge's desire to simplify your behavior enough to keep track of you and control you.
4JGWeissman14y
Simplifying my behavior enough to keep track of me and control me is tyranny.
3SilasBarta14y
Except that there are records (history of paying bills, rent), it's just that the lenders won't look at them. Maybe financial gurus should think about that before they say "stay away from credit cards entirely". It should be "You MUST get a credit card, but pay the balance." (This is another case of addictive stuff that can't addict me.) (Please, don't bother with advice, the problem has since been solved; credit unions are run by non-idiots, it seems, and don't make the above lender errors.) ETA: Sorry for the snarky tone; your points are valid, I just disagree about their applicability to this specific situation.
8Vladimir_M14y
SilasBarta: Well, is it really possible that lenders are so stupid that they're missing profit opportunities because such straightforward ideas don't occur to them? I would say that lacking insider information on the way they do business, the rational conclusion would be that, for whatever reasons, either they are not permitted to use these criteria, or these criteria would not be so good after all if applied on a large scale. (See my above comment for an elaboration on this topic.) Or maybe the reason is that credit unions are operating under different legal constraints and, being smaller, they can afford to use less tightly formalized decision-making rules?
4SilasBarta14y
No, they do require that information to get the subprime loan; it's just that they classified me as subprime based purely on the lack of credit history, irrespective of that non-loan history. Providing that information, though required, doesn't get you back into prime territory. Considering that in the recent financial industry crisis, the credit unions virtually never needed a bailout, while most of the large banks did, there is good support for the hypothesis of CU = non-idiot, larger banks/mortgage brokers = idiot. (Of course, I do differ from the general subprime population in that if I see that I can only get bad terms on a mortgage, I don't accept them.)
4Vladimir_M14y
SilasBarta: This merely means that their formal criteria for sorting out loan applicants into officially recognized categories disallow the use of this information -- which would be fully consistent with my propositions from the above comments. Mortgage lending, especially subprime lending, has been a highly politicized issue in the U.S. for many years, and this business presents an especially dense and dangerous legal minefield. Multifarious politicians, bureaucrats, courts, and prominent activists have a stake in that game, and they have all been using whatever means are at their disposal to influence the major lenders, whether by carrots or by sticks. All this has undoubtedly influenced the rules under which loans are handed out in practice, making the bureaucratic rules and procedures of large lenders seem even more nonsensical from the common person's perspective than they would otherwise be. (I won't get into too many specifics in order to avoid raising controversial political topics, but I think my point should be clear at least in the abstract, even if we disagree about the concrete details.) Why do you assume that the bailouts are indicative of idiocy? You seem to be assuming that -- roughly speaking -- the major financiers have been engaged in more or less regular market-economy business and done a bad job due to stupidity and incompetence. That, however, is a highly inaccurate model of how the modern financial industry operates and its relationship with various branches of the government -- inaccurate to the point of uselessness.
1SilasBarta14y
I actually agree with most of those points, and I've made many such criticisms myself. So perhaps larger banks are forced into a position where they rely too much on credit scores at one stage. Still, credit unions won, despite having much less political pull, while significantly larger banks toppled. Much as I disagree with the policies you've described, some of the banks' errors (like assumptions about repayment rates) were bad, no matter what government policy is. If lending had really been regulated to the point of (expected) unprofitability, they could have gotten out of the business entirely, perhaps spinning off mortgage divisions as credit unions to take advantage of those laws. Instead, they used their political power to "dance with the devil", never adjusting for the resulting risks, either political or in real estate. There's stupidity in that somewhere.
6mattnewport14y
In some cases this was an example of the principal–agent problem - the interests of bank employees were not necessarily aligned with the interests of the shareholders. Bank executives can 'win' even when their bank topples.
0RobinZ14y
The principal-agent problem should always be on the list of candidates, but it can occasionally be eliminated as an explanation. I was listening to the This American Life episode "Return to the Giant Pool of Money", and more than one of the agents in the chain had large amounts of their resources wiped out.
1mattnewport14y
The question of whether an agent's interests are aligned with the principal's is largely orthogonal to the question of whether the agent achieves a positive return. The agent's expected return is more relevant.
0RobinZ14y
There were many agents involved in the recent financial unpleasantness whose harm was enabled by the principal-agent problem. My intended examples did not suffer that problem. I could have made that clearer.
1Douglas_Knight14y
These are not such different answers. Working on a large scale tends to require hiring (potentially) stupid people and giving them little flexibility.
1Vladimir_M14y
Yes, that's certainly true. In fact, what you say is very similar to one of the points I made in my first comment in this thread (see its second paragraph).
3NancyLebovitz14y
Fair point. This does replicate the Conservation of Thought theme. I think a good bit about business can be explained as not bothering because one's competitors haven't bothered either. I've seen financial gurus recommend getting a credit card and paying the balance. And thanks for the ETA.
4mattnewport14y
Ramit Sethi for example. I had the impression that this was actually pretty much the standard advice from personal finance experts. Most of them are not worth listening to anyway though.
1SilasBarta14y
This might be what they say in their books, where they give a detailed financial plan, though I doubt even that. What they advise is usually directed at the average mouthbreather who gets deep into credit card debt. They don't need to advise such people to build a credit history by getting a credit card solely for that purpose -- that ship has already sailed! All I ever hear from them is "Stay away from credit cards entirely! Those are a trap!" I had never once heard a caveat about, "oh, but make sure to get one anyway so you don't find yourself at 24 without a credit history, just pay the balance." No, for most of what they say to make sense, you have to start from the assumption that the listener typically doesn't pay the full balance, and is somehow enlightened by moving to such a policy. Notice how the citation you give is from a chapter-length treatment from a less-known finance guru (than Ramsey, Orman, Howard, etc.), and it's about "optimizing credit cards", a kind of complex, niche strategy. Not standard, general advice from a household name.
1Blueberry14y
That would be an insanely stupid thing for anyone to say. Credit cards are very useful if used properly. I agree with mattnewport that the standard advice given in financial books is to charge a small amount every month to build up a credit rating. Also, charge large purchases at the best interest rate you can find when you'll use the purchases over time and you have a budget that will allow you to pay them off.
1SilasBarta14y
Well, then I don't know what to tell you. I'd listened to financial advice shows on and off and had read Clark Howard's book before applying for the mortgage back then, and never once did I hear or read that you should get a credit card merely to establish a credit history (and this is not why they issue them). I suspect it's because their advice begins from the assumption that you're in credit card debt, and you need to get out of that first, "you bozo". And your comment about the usefulness of credit cards for borrowing is a bit ivory-tower. In actual experience, based on all the expose reports and news stories I've seen, it's pretty much impossible to do that kind of planning, since credit card companies reserve the right to make arbitrary changes to the terms -- and use that right. I remember one case where a bank issued a card that had a "guaranteed" 1.9% rate for ~6 months with a ~$5000 limit -- but if you actually used anything approaching that limit, they would invoke the credit risk clauses of the agreement, deem you a high risk because of all the debt you're carrying, and jack up your rate to over 20%. So, a 1.9% loan that they can immediately change to 20% if they feel like it -- in what sense was it a 1.9% loan? For that reason, I don't even consider using a credit card for installment purchases.
0Blueberry14y
Wow, they can jack up the rate like that? I would definitely consider that fraud and abuse. That's not common, however, and Congress recently passed legislation to prevent that sort of abuse. Currently, I don't have the option of not using a credit card; I would starve to death without it.
4SilasBarta14y
I thought so too, but then was overwhelmed with stories like that. Most credit cards agreements are written with a clause that says, "we can do whatever we want, and the most you can do to reject the new terms is pay off the entire debt in 15 days". This is one of the few instances where courts will honor a contract that gives one party such open-ended power over the other. If you haven't been burned this way, it's just a matter of time. And if you google the topic, I'm sure you'll find enough to satisfy your evidence threshold. Would you starve to death with it? If you can service the debts, let me loan you the money; at this point, most investors would sell out their mother to get a fraction of the interest rate on their savings that most credit cards charge. (Not that I would, but I'd turn down the offer without my trademark rudeness...)
0CronoDAS14y
::followed link:: Did you ever experience nicotine withdrawal symptoms? In people who aren't long-time smokers, they can take up to a week to appear.
2Vladimir_M14y
For what that's worth, when I quit smoking, I didn't feel any withdrawal symptoms except being a bit nervous and irritable for a single day (and I'm not even sure if quitting was the cause, since it coincided with some stressful issues at work that could well have caused it regardless). That was after a few years of smoking something like two packs a week on average (and much more than that during holidays and other periods when I went out a lot). From my experience, as well as what I observed from several people I know very well, most of what is nowadays widely believed about addiction is a myth.
0SilasBarta14y
No, never did. My best guess is that I didn't smoke heavily enough to get a real addiction, though I smoked enough to get the psychoactive effects.
3Kevin14y
Yes, I would think it would take around 5-10 cigarettes a day (or more) for at least a week to develop an addiction. While cigarettes (and heroin, and caffeine) are very physically addictive, it still takes sustained, moderately high use to develop a physical addiction. Most cigarette smokers describe their addictions in terms of "x packs per day".
1SilasBarta14y
Okay, then I guess my case isn't informative ... I'd use the pack/year metric instead of the pack/day.
0CronoDAS14y
I wish I could direct you to this Scientific American article so I could ask how it compares to your experiences, but it's behind a paywall.
0SilasBarta14y
From what I can see before the paywall, it looks like I definitely didn't meet the threshold under the best science, but I could probably cross it from 5 cigarettes per day. I'd only try that out if I were rewarded for doing it (but not for stopping, as that would defeat the purpose of such an experiment).
9CronoDAS14y
I read the article on paper before it was hidden in a paywall, so I can summarize some of the findings:
1) Rat brains are irrevocably changed by a single dose of nicotine.
2) Brains of rats that have never been exposed to nicotine ("non-smokers"), those that are currently given nicotine on a regular basis ("current smokers"), and those that used to be given nicotine on a regular basis but have been deprived of it for a long time ("former smokers") are all distinguishable from each other.
3) The author notes that the primary effect of nicotine on addicted human smokers appears to be suppressing craving for itself.
4) The author hypothesizes that the brain has a craving-generating system and a separate craving-suppression system. (These systems apply to appetites in general, such as the desire to eat food.) He further goes on to speculate that the primary action of nicotine is to suppress craving. This has the effect of throwing the two systems out of equilibrium, so the brain's craving-generation system "works harder" to counter the effects of nicotine. When the effects of nicotine wear off (which can take much longer than the time it takes for the nicotine to leave the body), the equilibrium is once again thrown out of balance, resulting in cravings. (The effects of smoking on weight are mentioned as support for this hypothesis.)
2Douglas_Knight14y
Expected profit explains much behavior of credit card companies, but I don't think it helps at all with the behavior of the credit score system or mortgage lenders (Silas's example!). Nancy's answer looks much better to me (except her use of the word "also").

(Wherein I seek advice on what may be a fairly important decision.)

Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. Th... (read more)

7orthonormal14y
The amount you could slow down Moore's Law by any strategy is minuscule compared to the amount you can contribute to FAI progress if you choose. It's like feeling guilty over not recycling a paper cup, when you're planning to become a lobbyist for an environmentalist group later.
7NaN14y
Uninformed opinion: space weather modelling doesn't seem like a huge market, especially when you compare it to the truly massive gaming market. I doubt the increase in demand would be significant, and if what you're worried about is rate of growth, it seems like delaying it a couple of years would be wholly insignificant.
5Kaj_Sotala14y
I would say that there seem to be a lot of companies that are in one way or another trying to advance Moore's law. For as long as it doesn't seem like the one you're working on has a truly revolutionary advantage as compared to the other companies, just taking the money but donating a large portion of it to existential risk reduction is probably an okay move. (Full disclosure: I'm an SIAI Visiting Fellow so they're paying my upkeep right now.)
4Roko14y
Personally trying to slow Moore's Law down is the kind of foolishness that Eliezer seems to inspire in young people...
1university_student14y
Do you mean that he actively seeks to encourage young people to try and slow Moore's Law, or that this is an unintentional consequence of his writings on AI risk topics?
2JoshuaZ14y
I'm pretty sure that Roko means the second. If this idea got mentioned to Eliezer I'm pretty sure he'd point out the minimal impact that any single human can have on this, even before one gets to whether or not it is a good idea.
-8rwallace14y

Should we buy insurance at all?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt. If this is the case, should we get rid of all our insurance? If not, why not?

There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making about insurance saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt.

No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)

Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.

ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.

(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis roland presents, and you can still probably find it on the internet.... (read more)
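A minimal numerical sketch of the diminishing-marginal-utility point above, assuming log utility and entirely made-up figures for wealth, loss size, loss probability, and premium:

```python
# Insurance with negative expected monetary value but positive expected
# utility, under an assumed log utility function. All numbers invented.
import math

wealth = 100_000.0
loss, p_loss = 50_000.0, 1 / 1000
premium = 60.0  # > p_loss * loss = $50, so the insurer profits on average

u = math.log  # illustrative risk-averse (diminishing marginal) utility

eu_uninsured = (1 - p_loss) * u(wealth) + p_loss * u(wealth - loss)
eu_insured = u(wealth - premium)  # guaranteed small loss, no tail risk

print(eu_insured > eu_uninsured)  # True: the insured option wins in utility
```

With a much larger premium (say $700 on the same numbers) the comparison flips, which is where the insurer's vig, discussed further down the thread, can eat the risk-aversion benefit.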

1mkehrt14y
I voted this up, but I want to comment to point out that this is a really important point. Don't be tricked into not getting insurance just because it has a negative expected monetary value.
3mattnewport14y
I voted Silas up as well because it's an important point but it shouldn't be taken as a general reason to buy as much insurance as possible (I doubt Silas intended it that way either). Jonathan_Graehl's point that you should self-insure if you can afford to and only take insurance for risks you cannot afford to self-insure is probably the right balance. Personally I don't directly pay for any insurance. I live in Canada (universal health coverage) and have extended health insurance through work (much to my dismay I cannot decline it in favor of cash) which means I have far more health insurance than I would purchase with my own money. Given my aversion to paperwork I don't even fully use what I have. I do not own a house or a car which are the other two areas arguably worth insuring. I don't have dependents so have no need for life or disability coverage. All other forms of insurance fall into the 'self-insure' category for me given my relatively low risk aversion.
8RobinZ14y
Risk is more expensive when you have a smaller bankroll. Many slot machines actually offer positive expected value payouts - they make their return on people plowing their winnings back in until they go broke.
6Douglas_Knight14y
Citation please? A cursory search suggests that machines go through +EV phases, just like blackjack, but that individual machines are -EV. It's not just that they expect people to plow the money back in, but that pros have to wait for fish to plow money in to get to the +EV situation. The difference with blackjack is that you can (in theory) adjust your bet to take advantage of the different phases of blackjack. Your first sentence seems to match Roland's comment about the Kelly criterion (you lose betting against snake eyes if you bet your whole bankroll every time), but that doesn't make sense with fixed-bet slots. There, if it made sense to make the first bet, it makes sense to continuing betting after a jackpot.
3Dagon14y
This comes up frequently in gambling and statistics circles. "Citation please" is the correct response - casinos do NOT expect to make a profit by offering losing (for them) bets and letting "gambler's ruin" pay them off. It just doesn't work that way. The fact that a +moneyEV bet can be -utilityEV for a gambler does NOT imply that a -moneyEV bet can be +utilityEV for the casino. It's -utility for both participants. The only reason casinos offer such bets ever is for promotional reasons, and they hope to make the money back on different wagers the gambler will make while there. The Kelly calculations work just fine for all these bets - for cyclic bets, it ends up you should bet 0 when -EV. When +EV, bet some fraction of your bankroll that maximizes mean-log-outcome for each wager.
1CronoDAS14y
Some casinos advertise that they have slots with "up to" a 101% rate of return. Good luck finding the one machine in the casino that actually has a positive EV, though!
1RobinZ14y
On the scale from "saw it in The Da Vinci Code" to "saw it in Nature", I'd have to say all I have is an anecdote from a respectable blogger: I'll give you that "many" is almost certainly flat wrong, on reflection, but such machines are (were?) probably out there.
8SilasBarta14y
That movie was full of falsehoods. For example, people named Silas are actually no more or less likely than the general population to be tall homicidal albino monks -- but you wouldn't guess that from seeing the movie, now, would you?
2RobinZ14y
That's why it represents the bottom end of my "source-reliability" scale.
5bentarm14y
The only relevant part of the quote seems to be: I'm pretty sure it's not that unlikely to come up ahead 'three or four' times when playing slot machines (if it weren't so late I'd actually do the sums). It seems much more plausible that the blog author was just lucky than that the machines were actually set to regularly pay out positive amounts.
5roland14y
Ahh, Kelly criterion, correct?
1RobinZ14y
... *looks up Kelly criterion* That's definitely a related result. (So related, in fact, that thinking about the +EV slots the other day got me wondering what the optimal fraction of your wealth was to bid on an arbitrary bet - which, of course, is just the Kelly criterion.)
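For reference, a quick sketch of the Kelly criterion for a simple binary bet; the formula f* = (bp - q)/b is the standard one, and the numbers in the example calls are illustrative:

```python
# Kelly criterion for a bet paying net odds of b-to-1 with win probability
# p: the log-wealth-maximizing bankroll fraction is f* = (b*p - q) / b,
# clipped at 0 for -EV bets (the "bet nothing when -EV" case above).

def kelly_fraction(p: float, b: float) -> float:
    """Optimal bankroll fraction for win probability p and net odds b-to-1."""
    q = 1.0 - p
    return max(0.0, (b * p - q) / b)

print(kelly_fraction(0.50, 1.0))  # fair coin at even odds: bet nothing
print(kelly_fraction(0.55, 1.0))  # slight edge at even odds: bet ~10%
```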
4gwern14y
I'd like to pose a related question. Why is insurance structured as up-front payments and unlimited coverage, and not as conditional loans? For example, one could imagine car insurance as an options contract (or perhaps a futures) where if your car is totaled, you get a loan sufficient for replacement. One then pays off the loan with interest. The person buying this form of insurance makes fewer payments upfront, reducing their opportunity costs and also the risk of letting insurance lapse due to random fluctuations. The entity selling this form of insurance reduces the risk of moral hazard (i.e. someone taking out insurance, torching their car, and then letting insurance lapse the next month). Except in assuming strange consumer preferences or irrationality, I don't see any obvious reason why this form of insurance isn't superior to the usual kind.
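A toy comparison of who bears the cost under the two structures, under simplifying assumptions not in the comment (actuarially fair pricing, zero insurer loading, zero interest):

```python
# Conventional insurance vs. the conditional-loan scheme above, with
# invented numbers: $20,000 cars and a 1% annual chance of a total loss.

car_value, p_total = 20_000.0, 0.01

# Conventional insurance: expected losses are spread over all drivers.
premium = p_total * car_value       # $200 per driver per year
lucky_cost = premium                # $200
unlucky_cost = premium              # also $200 -- the payout is not repaid

# Conditional loan: nobody pays up front, but the unlucky repay in full.
lucky_cost_loan = 0.0
unlucky_cost_loan = car_value       # $20,000, merely spread over time

# Expected outlay per driver is identical under both schemes...
assert premium == p_total * unlucky_cost_loan + (1 - p_total) * lucky_cost_loan
# ...so the difference is purely in who bears the catastrophe's cost:
# pooled across everyone, or left entirely on the victim. That absence of
# risk transfer is the objection raised in the reply below.
```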
5Vladimir_M14y
Well, look at a more extreme example. Imagine an accident in which you not just total a car, but you're also on the hook for a large bill in medical costs, and there's no way you can afford to pay this bill even if it's transmuted into a loan with very favorable terms. With ordinary insurance, you're off the hook even in this situation -- except possibly for the increased future insurance costs now that the accident is on your record, which you'll still likely be able to afford. The goal of insurance is to transfer money from a large mass of people to a minority that happens to be struck by an improbable catastrophic event (with the insurer taking a share as the transaction-facilitating middleman, of course). Thus a small possibility of a catastrophic cost is transmuted into the certainty of a bearable cost. This wouldn't be possible if instead of getting you off the hook, the insurer burdened you with an immense debt in case of disaster. (A corollary of this observation is that the notion of "health insurance" is one of the worst misnomers to ever enter public circulation.)
2gwern14y
Alright, so this might not work for medical disasters late in life, things that directly affect future earning power. (Some of those could be handled by savings made possible by not having to make insurance payments.) But that's just one small area of insurance. You've got housing, cars, unemployment, and this is just what comes to mind for consumers, never mind all the corporate or business need for insurance. Are all of those entities buying insurance really not in a position to repay a loan after a catastrophe's occurrence? Even nigh-immortal institutions?
3Vladimir_M14y
I wouldn't say that the scenarios I described are "just one small area of insurance." Most things for which people buy insurance fit under that pattern -- for a small to moderate price, you buy the right to claim a large sum that saves you, or at least alleviates your position, if an improbable ruinous event occurs. (Or, in the specific case of life insurance, that sum is supposed to alleviate the position of others you care about who would suffer if you die unexpectedly.) However, it should also be noted that the role of insurance companies is not limited to risk pooling. Since in case of disaster the burden falls on them, they also specialize in specific forms of damage control (e.g. by aggressive lawyering, and generally by having non-trivial knowledge on how to make the best out of specific bad situations). Therefore, the expected benefit from insurance might actually be higher than the cost even regardless of risk aversion. Of course, insurers could play the same role within your proposed emergency loan scheme. It could also be that certain forms of insurance are mandated by regulations even when it comes to institutions large enough that they'd be better off pooling their own risk, or that you're not allowed to do certain types of transactions except under the official guise of "insurance." I'd be surprised if the modern infinitely complex mazes of business regulation don't give rise to at least some such situations. Moreover, there is also the confusion caused by the fact that governments like to give the name of "insurance" to various programs that have little or nothing to do with actuarial risk, and in fact represent more or less pure transfer schemes. (I'm not trying to open a discussion about the merits of such schemes; I'm merely noting that they, as a matter of fact, aren't based on the risk pooling that is the basis of insurance in the true sense of the term.)
2gwern14y
Intrinsically, the average person must pay in more than they get out. Otherwise the insurance company would go bankrupt. No reason a loan-style insurance company couldn't do the exact same thing. 'Rent-seeking' and 'regulatory capture' are certainly good answers to the question of why this doesn't exist.
2Nick_Tarleton14y
For one thing, insurance makes expenses more predictable; though the desire for predictability (in order to budget, or the like) does probably indicate irrationality and/or bounded rationality.
-1gwern14y
What's unpredictable about a loan? You can predict what you'll be paying pretty darn precisely, and there's no intrinsic reason that your monthly loan repayments would have to be higher than your insurance pre-payments.
3Nick_Tarleton14y
You can't predict when you'll have to start paying.
0[anonymous]14y
It's not predictable when you'll have to start making payments.
1Jonathan_Graehl14y
Obviously if you know your utility function and the true distribution of possible risks, it's easy to decide whether to take a particular insurance deal. The standard advice is that if you can afford to self-insure, you should, for the reason you cite (that insurance companies make a profit, on average). That's a heuristic that holds up fine except when you know (for reasons you will keep secret from insurers) your own risk is higher than they could expect; then, depending on how competitive insurers are, even if you're not too risk-averse, you might find a good deal, even to the extent that you turn an expected (discounted) profit, and so should buy it even if you have zero risk aversion. Apparently in California, auto insurers are required to publish the algorithm by which they assign premiums (and are possibly prohibited from using certain types of information). Conversely, you may choose to have no insurance (or an extremely high deductible) in cases where you believe your personal risk is far below what the insurer appears to believe, even when you're actually averse to that risk. Of course, it's not sufficient to know how wrong the insurer's estimate of your risk is; they insist on a pretty wide vig - not just to survive both uncertainties in their estimation of risk and the market returns on the float, but also to compensate for the observed amount of successful adverse selection that results from people applying the above heuristic. I suppose it may also be possible that the insurer won't pay. I don't know exactly what guarantees we have in the U.S.
1Douglas_Knight14y
Actually, I think that for voluntary insurance, the observed adverse selection is negative, but I can't find the cite. People simply don't do cost-benefit calculations. People who buy insurance are those who are terribly risk-averse or see it as part of their role. Such people tend to be more careful than the general population. In a competitive market, the price of insurance would be bid down to reflect this, but it isn't.
0JamesAndrix14y
We should form large nonprofit risk pools.
0[anonymous]14y
Some insurances are not worth getting, obviously. Like insurance on laptops or music players. But that insurance in general has negative expected utility assumes no risk aversion. If you can handle the risks on your own - if you are effectively self-insuring - then you probably should do that. But a house burning down or getting a rare cancer that will cost millions to treat: these are not self-insurable things unless you are a millionaire.

Guided by Parasites: Toxoplasma Modified Humans

a ~20 minute (absolutely worth every minute) interview with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma & its effects on humans. This is a must see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.

2NancyLebovitz14y
Thanks for the link. If people's desires are influenced by parasites, what does that do to CEV?
6Blueberry14y
If your desires are influenced by parasites, then the parasites are part of what makes you you. You may as well ask "If people's desires are influenced by their past experience, what does that do to CEV?" or "If people's desires are influenced by their brain chemistry, what does that do to CEV?"
9Alexandros14y
So what if Dr. Evil releases a parasite that rewires humanity's brains in a predetermined manner? Should CEV take that into account or should it aim to become Coherent Extrapolated Disinfected Volition?
5cupholder14y
What if Dr. Evil publishes a book or makes a movie that rewires humanity's brains in a predetermined manner?
2Alexandros14y
Yep, I made a reference to cultural influence here. That's why I suspect CEV should be applied uniformly to the identity-space of all possible humans rather than the subset of humans that happen to exist when it gets applied. In that case, defining humanity becomes very, very important. Of course, perhaps the current formulation of CEV covers the entire identity-space equally and treats the living population as a sample, and I have misunderstood. But if that is the case, Wei Dai's last article is also bunk, and I trust him to have a better understanding of all things FAI than myself.
3cupholder14y
Heh - my first instinct is to bite the bullet and apply CEV to existing humans only. I couldn't give a strong argument for that, though; I just can't immediately think of a reason to exclude non-culturally influenced humans while including culturally influenced humans.
2NancyLebovitz14y
It's hard to tell what counts as an influence and what doesn't. It would be interesting to see what would happen if the effects of parasites could be identified and reversed. The results wouldn't necessarily all be good, though.
0Alexandros14y
I am not sure I follow your last sentence. Can you elaborate?
2cupholder14y
I'll give it a try. A human's mind and preferences might be influenced by cultural things like books and TV, and they might be influenced by non-cultural things like parasites. (And of course a lot of people will be influenced by both.) I can't think of a reason to include the former in CEV and exclude the latter that feels non-arbitrary to me, so I don't feel as if parasitically modified brains warrant different treatment, such as altering CEV to cover the space of all possible humans. My gut evaluates the prospect of parasite-driven brains as just another kind of human brain. (I'm presuming as well that CEV as currently formulated is just meant to cover existing humans, not all possible humans.) That makes me content to apply CEV to existing humans only - I don't feel I have to try to account for brain changes due to culture or parasites or what have you by expanding it to incorporate all of brain space.
3Blueberry14y
You may as well ask: "What if Dr. Evil kills every other living organism? Should CEV take that into account or should it aim to become Coherent Extrapolated Resurrected Volition?" Of course, if someone modifies or kills all the other humans, that will change the result of CEV. Garbage in, garbage out.

I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I easily could forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else perhaps..?

It's an argument for dualism. Here is some background:


I've always been a monist: believing that everything should be coherent from within this reality. Th... (read more)

2ata14y
I don't see where dualism comes in. Specifically what kind of dualism are you talking about?

----------------------------------------

A problem being unsolvable within some system does not imply that there is some outer system where it can be solved. Take the Halting Problem, for example: there are programs such that we cannot prove whether or not they will ever halt, and this itself is provable. Yet there is a right answer in any given instance — a program will halt or it won't — but we can never know in some cases. That you say "I cannot understand what the answer to the problem could possibly be" suggests that it is a wrong question. Ask "Why do I think the universe exists?" instead of "Why does the universe exist?". I have my tentatively preferred answer to that, but maybe you will come up with something interesting.
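For concreteness, a sketch of the classic diagonalization behind the Halting Problem claim; the `halts` oracle below is hypothetical, and the point of the argument is precisely that it cannot exist:

```python
# Diagonalization sketch. `halts` is a hypothetical oracle, not a real
# function -- the argument shows no total, correct version can exist.

def halts(program, inp) -> bool:
    """Hypothetical: True iff program(inp) eventually halts."""
    raise NotImplementedError("no general implementation can exist")

def contrary(program):
    """Does the opposite of whatever the oracle predicts for program(program)."""
    if halts(program, program):
        while True:  # oracle says "halts" -> loop forever
            pass
    # oracle says "loops forever" -> halt immediately

# contrary(contrary) halts iff halts(contrary, contrary) says it doesn't:
# a contradiction. So no total, correct `halts` exists, even though every
# particular program either halts or it doesn't.
```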
2Blueberry14y
What is it?
0byrnema14y
Agreed, I was imprecise before. It is not generally 'a problem' if something is unknown. In the case of the halting problem, it's OK if the algorithm doesn't know when it is going to halt. (This doesn't make it incomplete.) However, it is a problem if X doesn't know how X was created (this makes X incomplete). The difference is that an algorithm can be implemented -- and fully aware of how it is implemented, and know every line of its own code -- without knowing where it is going to halt. Where it's going to halt isn't squirreled away in some other domain to be read at the right moment; the rules for halting are known by the algorithm, it just doesn't know when those rules will be satisfied. In contrast, X could not have created itself without any source code to do so. The analogous situation would be an algorithm that has halted but doesn't know why it halted. If it cannot know through self-inspection why it halted, then it is incomplete: it must deduce that something outside itself caused it to halt.
0byrnema14y
I agree that when a question doesn't have any possibility of an answer, it's probably a wrong question. But in this case, I don't see how it could be a wrong question. It seems like a perfectly reasonable question that we've gotten habituated to not having an answer to. It's evidence -- if we were looking for evidence -- that X is incomplete and we are in a simulation. We put a lot of store in the convenient fact that our reality is causal. So why can't we ask what caused reality? No, I don't come up with anything. I feel like anything that a person could possibly come up with would be philosophy (a non-scientific answer outside X). But please do share your answer (even if it is philosophy, as I expect). (By dualism, I mean that there are aspects of reality we interact with beyond science, so that physical materialism or scientism, etc., would be incomplete epistemologies.)
0ata14y
Here's where I stated it most recently, and I wrote an earlier post getting at the same sort of thing (where I see you posted a few comments), but at this point I've decided to abstain from actually advocating it until I have a better handle on some of the currently-unanswered questions raised by it. At the same time, I do feel like this line of reasoning (the conclusion I like to sum up as "Existence is what mathematical possibility feels like from the inside") is a step in the right direction. I do realize now that it is not as complete a solution as I originally thought — it makes me feel less confused about existence, but newly confused about other things — but I do still have the sense that the ultimately correct explanation of existence will not specially privilege this reality over others, and that our mental algorithms regarding "existence" are leading us astray. That seems to be the only state of affairs that does not compel us to believe in an infinite regress of causality, which doesn't really seem to explain anything, if it even makes logical sense. In any case, although I definitely have to concede that this problem is not solved, I am not convinced that it is not solvable. Metaphysical cosmology has been one of the most difficult areas of philosophy to turn into science or math, but it may yet fall. Alright, that's what threw me off. I think "dualism" is usually used to refer specifically to theories that postulate ontologically-basic mental substances or properties separate from normal physical interactions; not that "there are aspects of reality we interact with beyond science", but that our consciousness or minds are made of something beyond science. Your reasoning does not imply the latter, correct?
0byrnema14y
Oh, that was you. I think the Ultimate Ensemble idea is really appealing as an explanation of what existence is. (The way possibility feels from the inside, as you wrote.)
0[anonymous]14y
My answer to those questions should be the same. The process of answering either question should bring the two into line even if they were previously cached somewhat differently.
0Blueberry14y
By "problem of existence" you mean why we exist and how we came to exist? Why do you think that can't be answered within our world? And what do you think a world would look like if you could solve the problem in it?
0byrnema14y
Yes. Why and how anything exists, and what existence is. The reason I think this problem can't be answered within our world is that the lack of an answer doesn't seem to be a matter of lack of information. It's a unique question in that although it seems to be a reasonable question, there's no possibility of an answer to it, not even a false one. It's a reasonable question because X is a causal reality, so it is reasonable to ask what caused X. There's no possibility of an answer because causality is an arrow that always requires a point of departure. If you say the universe was created by a spark, and the rest followed by mathematics and logical necessity, still, what created that spark?

Religions have creation stories, but they explain the creation of X by invoking an act of creation outside X. So creation stories don't resolve the conundrum of creation; they just move creation to someplace outside experience, where we cannot expect to understand anything. This may represent a universal insight: that the existence of X cannot be explained within X. This is analogous to being in flatland and wondering about edges.

I suppose the main mysterious thing about the larger universe Y would be acausality. Here within X, it seems to be a rule, if not a logical principle, that everything is determined by something else. If something were to happen spontaneously, how did it decide to? What is the rule or pattern for its spontaneous appearance? These are all reasonable questions within X. Somehow Y gets around them.
0Blueberry14y
What do you think of the following answer? There is some evidence that backward time travel may be possible under some circumstances in a way that is compatible with general relativity. So suppose, many years in the future, a team of physicists and engineers creates a wormhole in the universe and sends something back to the time of the Big Bang, causing it and creating our universe. That way, it's all self-contained.
0byrnema14y
Self-contained is good, though it doesn't resolve the existence problem. (What is the appropriate cliché there... that you can't pull yourself up by your own bootstraps?) Backward time travel itself opens up a number of wonderful possibilities, including universe self-reflection and the possibility of a post-hoc framework of objective value.
0wedrifid14y
It also makes encryption more difficult!

In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.

Eliezer, do you have a full list of the 37 things you would never do as a Dark Lord, and if so, what's on it?

  1. I will not go around provoking strong, vicious enemies.
  2. Don't Brag
  3. ?
3Richard_Kennaway14y
All of the replies to this should be in the thread for discussing HP&tMoR.
1JoshuaZ14y
This is a reference to the Evil Overlord List. That's why Harry starts snickering. Indeed, it is almost implied that Voldemort wrote the actual Evil Overlord List. For the most common version of the actual list, see Peter's Evil Overlord List. Having such a list for Voldemort seems to be at least partially just rule of funny.
4MBlume14y
Did the Evil Overlord List exist publicly in 1991? I was actually a bit confused by Harry's laughter here. Eliezer seems to be working pretty hard to keep things actually in 1991 (truth and beauty, the Journal of Irreproducible Results, etc.)
1JoshuaZ14y
That's a good point. I'm pretty sure the Evil Overlord List didn't exist that far back, at least not publicly. It seems like for references to other fictional or nerd-culture elements he's willing to monkey around with time. Thus for example, there was a Professor Summers for Defense Against the Dark Arts which wouldn't fit with the standard chronology for Buffy at all.
4NancyLebovitz14y
Checking wikipedia, it looks possible but not likely that Harry could have seen the list in 1991.
1Blueberry14y
Well, he and his father are described as being huge science fiction fans, so it's not that unlikely that they heard about the list at conventions, or had someone show them an early version of the list printed from email discussions, even if they didn't have Internet access back then.
0NancyLebovitz14y
I'm pretty sure they did have internet access back then. It was more available through universities than it was to the general public.
1Blueberry14y
I meant even if Harry's parents didn't have access back then, someone could still have printed out the list and showed it to them.
1RomanDavis14y
That doesn't sound very rational. The simplest answer seems to be, "Eliezer thought it would be funny" and he would have included the Evil Overlord List in the fanfic even if the Evil Overlord he was talking about was Caligula.
0Blueberry14y
Of course it was included because Eliezer thought it would be funny. But I don't see what's so irrational about Harry reading the printed copy of the list.
0RomanDavis14y
Yes, but that's not the same as saying Eliezer actually went and looked up the earliest conceivable date to give Harry a reasonable chance of reading the list, or that he could have passed the joke up even if he had.
0JoshuaZ14y
Well, would Harry have started laughing if he had merely seen such a list before? I'm not sure, but the impression I got was that Harry was laughing because someone had made a list identical in form to a well-known geek list. If he had just happened to have seen such a list before, would it be as funny? Moreover, would that be what the reader would have expected to understand from the text?
1RobinZ14y
Maybe 'Quirrell' posted his version to FidoNet.
0JoshuaZ14y
Wouldn't Harry then have noticed that Quirrell's list overlapped with the one he had seen?
1RobinZ14y
Harry did correctly guess Item #2...
0JoshuaZ14y
Good point. That makes it much more plausible. Although given Harry's personality, I'd then expect him to test it by trying to guess the third and fourth.
0Oscar_Cunningham14y
Good call, although the fic doesn't explicitly mention the evil overlord list.
2RomanDavis14y
The reason I think it might actually be plot-relevant is that most people can't resist making a list much longer than 37 rules. Plus, most of the rules are just lampshades for tropes that show up again and again in fiction with evil overlords. They're rarely such basic, practical advice as "stop bragging so much."

Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want to pick a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a specific range of the form 1 to n, they are most likely to pick a number that is around 3n/4. The really clear examples are from 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact number, but I think around 30% pick 7), and 1 to 50, where a very large percentage will pick 37. The upshot is that if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.

Ouch. I am burned.

3JoshuaZ14y
Well, that's OK. Because I just wrote a review of Chapter 23 criticizing Harry's rush to conclude that magic is a single-allele Mendelian trait, and then read your chapter notes where you say the same thing. That should make us even.
2Oscar_Cunningham14y
It just occurred to me that the odd/even bias applies only because we work in base ten. Humans working in a prime base (like base 11) would be much less biased. (in this respect)
0JoshuaZ14y
Well, that seems plausible, although what is going on there is divisibility by 2, not primality. If your general hypothesis is correct, then if we used a base 9 system, numbers divisible by 3 might seem off. However, I'm not aware of any bias against numbers divisible by 5. And there's some evidence that suggests that parity is ingrained in human thinking (children can much more easily grasp the notion of whether a number is even or odd, and can do basic arithmetic with even/oddness much faster than with higher moduli).
3Oscar_Cunningham14y
I searched for "human random number" in Google and three of the results were polls on internet fora. Polls A & C were for numbers in the range 1 to 10, poll B was in the range 1 to 20. C had the best participation. (By coincidence, I had participated in poll B.) I screwed up my experimental design by not thinking of a test before I looked at the results, so if anyone else wants to judge these they should think up a measure of whether certain numbers are preferred before they follow the links. A B C
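For anyone who does want a concrete measure of whether certain numbers are preferred, a chi-squared goodness-of-fit test against the uniform distribution is a natural choice. A minimal sketch, with made-up tallies standing in for the real poll counts (the actual responses from A, B and C would go in `observed`):

```python
from scipy.stats import chisquare

# Hypothetical tallies for picks of 1..10; substitute the real poll counts.
# 100 uniform pickers would average 10 votes per number.
observed = [4, 5, 9, 6, 8, 5, 31, 10, 13, 9]  # note the spike at 7

# With no expected frequencies given, chisquare tests against uniform.
stat, p = chisquare(observed)
print(f"chi2 = {stat:.1f}, p = {p:.2g}")
# A tiny p-value means these picks are very unlikely under uniform
# choice, i.e. some numbers (here, 7) really are preferred.
```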
1RobinZ14y
JoshuaZ's statement implies a peak near 15 for B and outright states 30% of responses to A and C near 7. I would guess that 13 and 17 would be higher than 15 for B and that 7 will still be prominent, and that odd numbers (and, specifically, primes) will be disproportionately represented. I will not edit this comment after posting.
1Blueberry14y
Why primes?
3RobinZ14y
My instinct is that numbers with obvious factors (even numbers and multiples of five especially) will appear less random - and in the range from 1 to 20, that covers all the composites except 9.
0RomanDavis14y
I have a feeling they are ammunition in a Chekhov's gun, and therefore any attempts to get more data will lead to spoilers.

What does 'consciousness' mean?

I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.

People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encou...

0Richard_Kennaway14y
What I mean by "consciousness" is my sensation of my own presence. Googling for definitions of "conscious" and "consciousness" gives mostly similar forms of words, so that concept would appear to be what is generally understood by the words. Do philosophers have some other specific generally understood convention of exactly what they mean by these words?
0DanArmak14y
What exactly do you mean by 'sensation'? Does it have to do with "subjective experience" and "qualia", or just the bare fact that you're modeling yourself as part of the world, like RomanDavis and Blueberry's definitions?
3Richard_Kennaway14y
By "sensation" I mean the subjective experience. If you ask me what I mean by "subjective" and "experience", well, you could follow such a train of questions indefinitely and eventually I would have no answer. But what would that prove? You're not asking for a theory of how consciousness works, but a description of the thing that such a theory would be a theory of. Ask someone five centuries ago what they mean by "water" and all they'll be able to say is something like "the wet stuff that flows in rivers and falls from the sky". And you can ask them what they mean by "rivers" and "sky", but to what end? All you're likely to get if you press the matter is some bad science about the four elements. "Consciousness" is in a similar state. I have an experience I label with that word, but I can't tell you how that experience happens.
1DanArmak14y
That's great - I use the word in the same way. As far as I can tell, some other people don't - see the comments by RomanDavis and Blueberry that I linked to. This confusion over the meaning of the word is what I wanted to highlight. The way that some others use the word (to mean "an agent that models itself" or "an agent that perceives itself"), either they have successfully dissolved the question of what subjective experience is, or I don't understand them correctly, or indeed different people use the word to mean different things. And the reason I started out talking about that is that I've seen this cause confusion both on LW and elsewhere.
0RomanDavis14y
There are a lot of hypotheses floating around. Mine is: We have awareness. That is, we observe things in the territory with our senses, and include them in our map of the territory. The phenomenon we observe as consciousness is just our ability to include ourselves (our own minds, and some of their inner sensations) in the map. Some people think there are things you can only know if you experience them yourself. In theory, you could run a decent simulation of what it's like to be a bat, but you would still have memories of being human, and therefore awareness of bat territory wouldn't be enough. My solution: implant memories, including bat memories of not having human memories, into yourself. In theory, this should work.
1DanArmak14y
I hope you don't mean you're hypothesizing what the word "consciousness" means; rather, your hypotheses are alternate predictions about physical unknowns or about the future. Which is it? I'm asking what the definition, the meaning, of the word consciousness is. Hypothesizing what a word means feels like the wrong way to do things. Well, unless we're hypothesizing what other people mean when they say "consciousness". But if we're using the word here at LW we shouldn't need to hypothesize, we can all just tell one another what we mean... Under that definition, any agent that models the world and includes its own behavior in the model (and any good general model will do that) is called conscious. (I would call that self-modeling or self-aware.) So any moderately intelligent, effective agent - like my hypothetical aliens and androids - would be called conscious. That's a fine definition, but if everyone thought that, there would be no place for arguments about whether it's possible for zombies (let alone p-zombies) to exist. It doesn't seem to me that people see consciousness as meaning merely self-modeling.
1RomanDavis14y
I think consensus here is that the idea of P Zombies is silly.
0DanArmak14y
Certainly. But is the idea of ordinary zombies also silly? That's what your definition implies. ETA: not that I'm against that conclusion. It would make things so much simpler :-) It's just my experience that many people mean something else by "consciousness", something that would allow for zombies.
0RomanDavis14y
What's the difference?
0DanArmak14y
If you define "consciousness" in a way that allows for unconscious but intelligent, even human-equivalent agents, then those are called zombies. Aliens or AIs might well turn out to be zombies. Peter Watt's vampires from Blindsight are zombies. ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious - but is capable of all the behavior that humans are capable of. (My original comment was wrong (thanks Blueberry!) and said: The difference between a zombie and a p-zombie is that p-zombies claim to be conscious, while zombies neither claim nor believe to be conscious.)
3Blueberry14y
This is very different from my understanding of the definition of those terms, which is that p-zombies are physically identical to a conscious human, and a zombie is an unconscious human-equivalent with a physical, neurological difference. I don't see any reason why an unconscious human-equivalent couldn't erroneously claim to be conscious, any more than an unconscious computer could print out the sentence "I am conscious."
1DanArmak14y
You're right. It's what I meant, but I see that my explanation came out wrong. I'll fix it. That's true. But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious. My question is: what is being conscious defined to mean? If it's a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a "pure subjective" experience or quale.
0torekp14y
If a subjective experience is the same event, differently described, as a neural process, you can be wrong about whether you are having it. You can also be wrong about whether you and another being share the same or similar quale, especially if you infer such similarity solely from behavioral evidence.

Even aside from physical-side-of-the-same-coin considerations, a person can be mistaken about subjective experience. A tries the new soup at the restaurant and says "it tastes just like chicken". B says, "No, it tastes like turkey." A accepts the correction (and not just that it tastes like turkey to B). The plausibility of this scenario shows that we can be mistaken about qualia. Now, admittedly, that's a long way from being mistaken about whether one has qualia at all - but to rule that possibility in or out, we have to make some verbal choices clarifying what "qualia" will mean.

Roughly speaking, I see at least two alternatives for understanding "qualia". One would be to trot out a laundry list of human subjective feels: color sensations, pain, pleasure, tastes, etc., and then say "this kind of thing". That leaves the possibility of zombies wide open, since intelligent behavior is no guarantee of a particular familiar mental mechanism causing that behavior. (Compare: I see a car driving down the road, doing all the things an internal combustion engine-powered vehicle can do. That's no guarantee that internal combustion occurs within it.)

A second approach would be to define "qualia" by its role in the cognitive economy. Very roughly speaking, qualia are properties highly accessible to "executive function", which properties go beyond (are individuated more finely than by) their roles in representing, for the cognizer, the objective world. On this understanding of "qualia" zombies might be impossible - I'm not sure.
0Blueberry14y
Well, the claim would be objectively incorrect; I'm not sure it's meaningful to say that the zombie would be wrong. As others have commented, it's having the capacity to model oneself and one's perceptions of the world. If p-zombies are impossible, which they are, there are no "pure subjective" experiences: any entity's subjective experience corresponds to some objective feature of its brain or programming.
4DanArmak14y
That's not the definition that seems to be used in many of the discussions about consciousness. For instance, the term "Hard Problem of Consciousness" isn't talking about self-modeling. Let's take the discussion about p-zombies as an example. P-zombies are physically identical to normal humans, so they (that is, their brains) clearly model themselves and their own perceptions of the world. The claim that they are unconscious would then be in direct contradiction with the definition of consciousness. If proving that p-zombies are logically impossible were as simple as pointing this out, the whole debate wouldn't exist.

Beyond that example, I've gone through all LW posts that have "conscious" in their title:

* The Conscious Sorites Paradox, part of Eliezer's series on quantum physics. He says: [...] And then he says: [...] I read that as using "consciousness" to mean experience in the sense of subjective qualia.
* Framing Consciousness. cousin_it has retracted the post, but apparently not for reasons relevant to us here. It talks about "conscious/subjective experiences", and asks whether consciousness can be implemented on a Turing machine. Again, it's clear that a system that recursively models itself can be implemented on a TM, so that can't be what's being discussed.
* MWI, weird quantum experiments and future-directed continuity of conscious experience. Clearly uses "consciousness" to mean "subjective experience".
* Consciousness. Ditto.
* Outline of a lower bound for consciousness. I don't understand this post at first sight - I would have to read it more thoroughly...

The reason "subjective experience" is called subjective is that it's presumed not to be part of the objective, material world. That definition is dated now, of course. I don't want to turn this thread into a discussion of what consciousness is, or what subjective experience is. That's a discussion I'd be very interested in, but it should be separate. My original question was: what do people mean by "consciousness"?
0RomanDavis14y
Let's say you're having a subjective experience. Say, being stung by a wasp. How do you know? Right. You have to be aware of yourself, and your skin, and have pain receptors, and blah blah blah. But if you couldn't feel the pain, let's say because you were numb, you would still feel conscious. And if you were infected with a virus that made a wasp sting feel sugary and purple, rather than itchy and painful, you would also still be conscious. It's only when you don't have a model of yourself that consciousness becomes impossible.
0DanArmak14y
That doesn't mean they're the same thing. Unless you define them to mean the same thing. But as I described above, not everyone does that. There is no "Hard Problem of Modeling Yourself".
0Jack14y
Where the heck is this terminology coming from? As I learned it the 'philosophical' in "philosophical zombie" is just there to distinguish it from Romero-imagined brain-eating undead.
1Blueberry14y
Yes, but we need some other term for "unconscious human-like entity". I read one paper that used the terms "p-zombie" and "b-zombie", where the p stood for "physical" as well as "philosophical" and the b stood for "behavioral".
0Jack14y
I'd rather call the first an n-zombie (meaning neurologically identical to a human). And, yeah, let's use b-zombie instead of zombie, as all of these are varieties of philosophical zombie. (But yes, they're just words. Thanks for clarifying.)
0Vladimir_Nesov14y
P-zombies can write philosophical papers on p-zombies.
0RomanDavis14y
Oh, P Zombies are just the reductio ad absurdum version? Yeah, I don't believe in Zombies.
0JoshuaZ14y
P-zombies aren't just a reductio ad absurdum, although most of LW does consider them to be one. David Chalmers, who is a very respected philosopher, takes the idea quite seriously, as do a surprisingly large number of other philosophers.
0RomanDavis14y
Please explain to me how it is not. You can't just say, "This smart guy takes this very seriously." Aristotle took a lot of things very seriously that turned out to be nonsense.
2RichardChappell14y
'Zombie Review' provides some background here...
1JoshuaZ14y
My point is that it isn't regarded in general as a reductio. Indeed, it actually was originally constructed as an argument against physicalism. I see it as a reductio too, or, even more to the point, as an indication of how far into a corner dualism has been pushed by science. The really scary thing is that some philosophers seem to think that p-zombies are a slam-dunk argument for dualism.
1Jack14y
Who?
0JoshuaZ14y
Nagel and Chalmers both seem to think it is a strong argument. Kirk used to think it was, but has since gone about pi radians on that. My impression, from seeing Block mentioned in passing, is that he also sees it as a strong argument, but I haven't actually read anything by him.
2RichardChappell14y
Thinking it's a strong argument is, of course, still a long way from thinking it's a "slam dunk" (nobody that I'm aware of thinks that).
1JoshuaZ14y
Yeah, that wording may be too strong, although the impression I get certainly is that Kirk was convinced it was a slam dunk for quite some time. Kirk's book "Zombies and Consciousness" (which I've only read parts of) seems to describe him as having once considered it pretty close to a slam dunk. But yeah, my wording was probably too strong.
0RomanDavis14y
Okay, I agree. It's just really easy to take the explicit, "this guy takes it seriously" and make the implicit connection, "and this is totally not a silly idea at all."

What's the deal with female nymphomaniacs? Their existence seems a priori unlikely.

3RomanDavis14y
Then your priors are wrong. Adjust accordingly.
7Liron14y
"What's the deal with" means "What model would have generated a higher prior probability for". Noticing your confusion isn't the entire solution.
8Mitchell_Porter14y
If the existing model is sexual dimorphism, with high sexual desire a male trait, you could simply suppose that it's a "leaky" dimorphism, in which the sex-linked traits nonetheless show up in the other sex with some frequency. In humans this should especially be possible with male traits which depend not on the Y chromosome, but rather on having one X chromosome rather than two. That means that there is only one copy, rather than two, of the relevant gene, which means trait variance can be greater - in a woman, an unusual allele on one X chromosome may be diluted by a normal allele on the other X, whereas a man with an unusual X allele has no such counterbalance. But it would still be easy enough for a woman to end up with an unusual allele on both her Xs. Also, regardless of the specific genetic mechanism, human dimorphism is just not very extreme or absolute (compared to many other species), and forms intermediate between stereotypical male and female extremes are quite common.
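To put rough numbers on the one-copy-versus-two point, here is a toy sketch. The assumptions (a single recessive X-linked allele at frequency q, random mating, Hardy-Weinberg proportions) are mine, purely for illustration:

```python
# Toy model of an X-linked recessive trait under Hardy-Weinberg mixing.
def trait_frequency(q):
    male = q        # one X: a single unusual copy is expressed
    female = q * q  # two Xs: the allele must be on both to be expressed
    return male, female

for q in (0.01, 0.05, 0.20):
    m, f = trait_frequency(q)
    print(f"allele frequency {q:.2f}: {m:.2%} of men, {f:.4%} of women")
# At q = 0.05, 5% of men but only 0.25% of women express the trait:
# much rarer in women, yet far from impossible.
```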
1RomanDavis14y
I thought it was pretty clear. Sexual dimorphism doesn't operate the way you think it does. Women with high sex drives aren't rare at all. I have heard that, for most men and most women, the age of highest sex drive is very different (much younger for men than for women). This might account for the entire difference, especially if you're getting most of your information from the culture at large. As TVTropes will tell you, Most Writers Are Male.
3Vladimir_M14y
Why?
3gwern14y
And they are accordingly rare, are they not?
3Blueberry14y
No, women with a high sex drive are not rare.
0Liron14y
Maybe. I don't know.
1Richard_Kennaway14y
This question reads to me like it's out of the middle of some discussion I didn't hear the beginning of. Why were "nymphomaniacs" on your mind in the first place? What do you mean by the word? I don't think I've heard it in many years, and I associate it with the sexual superstitions of a former age.
1LucasSloan14y
What does the word "nymphomaniacs" mean? How do you judge someone to be sufficiently obsessed with sex to be a nymphomaniac? I think a lot of your confusion might be coming from your tendency to label people with this word, with its negative connotations. Does the question "what is with women who want to have sex [five times a week*] and will undertake to get it?" resolve any of your confusion? You should expect those women who have more sex to be more salient wrt people talking about them, so they would seem more prominent, even if only 2% of the population. *not sure about this number, just picked one that seemed alright.
5Alicorn14y
Five times a week wouldn't be remotely enough to diagnose. It has to be problematic and clinically significant.
2LucasSloan14y
I think that's kinda my point. I was attempting to point out that he's probably conflating the term "nymphomaniac", with its negative connotations, with "likes to have [some vaguely defined] a lot of sex."
3Blueberry14y
"Nymphomaniac" hasn't been a clinical diagnosis for a long time. In my experience, the word is now most commonly used colloquially to mean "a woman who likes to have a lot of sex". Whether this has negative connotations depends on your attitude to sex, I suppose.
2JoshuaZ14y
Picking a number for this seems like a really bad idea. For most modern clinical definitions of disorders, what matters is whether it interferes with normal daily behavior. Even that is questionable, since what constitutes interference is very hard to tell. Societies have had very different notions of what is acceptable sexuality for both males and females. Until fairly recently, homosexuality was considered a mental disorder in the US. And in the Victorian era, women were routinely diagnosed as nymphomaniacs for showing pretty minimal signs of sexuality.
0[anonymous]14y
This is one of the more bizarre things I've read recently.

In A Technical Explanation of Technical Explanation, Eliezer writes,

You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.

...

What we mean by "probability"

...
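The calibration test in the quoted passage is straightforward to operationalize: bucket your predictions by stated confidence and compare each bucket's hit rate to its confidence level. A minimal sketch with a hypothetical track record:

```python
from collections import defaultdict

# Hypothetical track record: (stated confidence, whether you were right).
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.9, True),
               (0.9, True), (0.9, True), (0.9, False), (0.9, True),
               (0.9, True), (0.9, False)]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%}: right {hit_rate:.0%} of the time "
          f"over {len(outcomes)} questions")
# Prints "said 90%: right 70% of the time over 10 questions" --
# exactly the poorly calibrated case described in the quote.
```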
7Morendil14y
As I understand it, frequentism requires large numbers of events for its interpretation of probability, whereas the Bayesian interpretation allows the convergence of relative frequencies with probabilities but claims that probability is a meaningful concept even when applied to unique events, as a "degree of plausibility".
6Vladimir_M14y
Do you (or anyone else reading this) know of any attempts to give a precise non-frequentist interpretation of the exact numerical values of Bayesian probabilities? What I mean is someone trying to give a precise meaning to the claim that the "degree of plausibility" of a hypothesis (or prediction or whatever) is, say, 0.98, which wouldn't boil down to the frequentist observation that relative to some reference class, it would be right 98/100 of the time, as in the above quoted example. Or to put it in a way that might perhaps be clearer, suppose we're dealing with the claim that the "degree of plausibility" of a hypothesis is 0.2. Not 0.19, or 0.21, or even 0.1999 or 0.2001, but exactly that specific value. Now, I have no intuition whatsoever for what it might mean that the "degree of plausibility" I assign to some proposition is equal to one of these numbers and not any of the other mentioned ones -- except if I can conceive of an experiment or observation (or at least a thought-experiment) that would yield that particular exact number via a frequentist ratio. I'm not trying to open the whole Bayesian vs. frequentist can of worms at this moment; I'd just like to find out if I've missed any significant references that discuss this particular question.
2Wei Dai14y
Have you seen my What Are Probabilities, Anyway? post?
1Vladimir_M14y
Yes, I remember reading that post a while ago when I was still just lurking here. But I forgot about it in the meantime, so thanks for bringing it to my attention again. It's something I'll definitely need to think about more.
0Morendil14y
In the Bayesian interpretation, the numerical value of a probability is derived via considerations such as the principle of indifference - if I know nothing more about proposition A than I know about proposition B, then I hold both equally probable. (So, if all I know about a coin is that it is a biased coin, without knowing how it is biased, I still hold heads and tails to be equally probable outcomes of the next coin flip.) If we do know something more about A or B, then we can apply formulae such as the sum rule or product rule, or Bayes' rule which is derived from them, to obtain a "posterior probability" based on our initial estimation (or "prior probability"). (In the coin example, I would be able to take into account any number of coin flips as evidence, but I would first need to specify through such a prior probability what I take "a biased coin" to mean in terms of probability; whereas a frequentist approach relies only on flipping the coin enough times to reach a given degree of confidence.) (Note, this is my understanding based on having partially read through precisely one text - Jaynes' Probability Theory - on top of some Web browsing; not an expert's opinion.)
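As a concrete version of the coin example: one standard way to specify what "a biased coin" means as a prior is a Beta distribution over the unknown bias, which Bayes' rule then updates flip by flip. A minimal sketch; the uniform Beta(1,1) prior is just one possible way to encode ignorance:

```python
# Unknown bias p = P(heads), with prior Beta(alpha, beta).
# Beta(1, 1) is uniform on [0, 1]: complete ignorance about the bias.
alpha, beta = 1.0, 1.0

for flip in "HHTHHHTH":  # hypothetical observed flips
    if flip == "H":
        alpha += 1  # by conjugacy, each head increments alpha...
    else:
        beta += 1   # ...and each tail increments beta.

# The posterior mean is the probability we now give the next head.
print(f"P(next flip is heads) = {alpha / (alpha + beta):.2f}")
# 6 heads in 8 flips: (1 + 6) / (2 + 8) = 0.70
```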
0JoshuaZ14y
Yes, you can do this precisely with measure theory, but some will argue that that is nice math but not a philosophically satisfying approach. Edit: A more concrete approach is to just think about it as what bets you should make about possible outcomes.
1Vladimir_M14y
I'm not sure I understand what exactly you have in mind. I am aware of the role of measure theory in the standard modern formalization of probability theory, and how it provides for a neat treatment of continuous probability distributions. However, what I'm interested in is not the math, but the meaning of the numbers in the real world. Bayesians often make claims like, say, "I assign the probability of 0.2 to the hypothesis/prediction X." This is a factual claim, which asserts that some quantity is equal to 0.2, not any other number. This means that those making such claims should be able to point at some observable property of the real world related to X that gives rise to this particular number, not some other one. What I'd like to find out is whether there are attempts at non-frequentist responses to this sort of request. But it seems to me that betting advice is fundamentally frequentist in nature. As far as I can see, the only practical test of whether a betting strategy is good or bad is the expected gain (or loss) it will provide over a large number of bets. [Edit: I should have been more clear here -- I assume that you are not using an incoherent strategy vulnerable to a Dutch book. I had in mind strategies where you respect the axioms of probability, and the only question is which numbers consistent with them you choose.]
1Oscar_Cunningham14y
Bayesians would say that the probability is (some function of) the expected value of one bet. Frequentists would say that it is (some function of) the actual value of many bets (as the number of bets goes to infinity). The whole point of looking at many bets is to make the average value close to the expected value (so that frequentists don't have to think about what "expected" actually means). You never have to say "the expected gain ... over a large number of bets." That would be redundant. What does "expected" actually mean? It's just the probability you should bet at to avoid the possibility of being Dutch-booked on any single bet. ETA: When you are being Dutch-booked, you don't get to look at all the offered bets at once and say "hold on a minute, you're trying to trick me". You get given each of the bets one at a time, and you have to bet Bayesianly on each one if you want to avoid any possibility of sure losses.
5Vladimir_M14y
I might be mistaken, but I think this still doesn't answer my question. I understand -- or at least I think I do -- how the Dutch book argument can be used to establish the axioms of probability and the entire mathematical theory that follows from them (including the Bayes theorem). The way I understand it, this argument says that once I've assigned some probability to an event, I must assign all the other probabilities in a way consistent with the probability axioms. For example, if I assign P(A) = 0.3 and P(B) = 0.4, I would be opening myself to a Dutch book if I assigned, say, P(~A) != 0.7 or P(A and B) > 0.3. So far, so good. However, I still don't see what, if anything, the Dutch book argument tells us about the ultimate meaning of the probability numbers. If I claim that the probability of Elbonia declaring war on Ruritania before next Christmas is 0.3, then to avoid being Dutch-booked, I must maintain that the probability of that event not happening is 0.7, and all the other stuff necessitated by the probability axioms. However, if someone comes to me and claims that the probability is not 0.3, but 0.4 instead, in what way could he argue, under any imaginable circumstances and either before or after the fact, that his figure is correct and mine not? What fact observable in physical reality could he point out and say that it's consistent with one number, but not the other? I understand that if we both stick to our different probabilities and make bets based on them, we can get Dutch-booked collectively (someone sells him a bet that pays off $100 if the war breaks out for $39, and to me a bet that pays off $100 in the reverse case for $69 -- and wins $8 whatever happens). But this merely tells us that something irrational is going on if we insist (and act) on different probability estimates. It doesn't tell us, as far as I can see, how one number could be correct, and all others incorrect -- unless we start talking about a large reference class of events and
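The $8 in that example is exactly the gap between the two probability assignments, scaled by the $100 stake. A sketch of the arithmetic (the $1 discounts are mine, to make each bet clearly attractive to its buyer):

```python
# A bookie exploits two bettors who assign different probabilities to war.
p_me, p_him = 0.3, 0.4
payout = 100

price_him = p_him * payout - 1      # $39: he values "pays on war" at $40
price_me = (1 - p_me) * payout - 1  # $69: I value "pays on no war" at $70

collected = price_him + price_me  # $108 comes in up front...
profit = collected - payout       # ...exactly $100 goes out either way
print(profit)  # 8.0, guaranteed, whichever of us turns out to be "right"
```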
0Morendil14y
There's nothing mysterious about it as far as I can tell, it's "just math". Give me a six-sided die and I'll compute the probability of it coming up 4 as 1/6. This simple exercise can become more complicated in one of two ways. You can ask me to compute the probability of a more complex event, e.g. "three even numbers in a row". This still has an exact answer. The other complication is if the die is loaded. One way I might find out how that affects its single-throw probabilities is by throwing it a large number of times, but conceivably I can also X-ray the die, find out how its mass is distributed, and deduce from that how the single-throw probabilities differ. (Offhand I'd say that faces farther from the center of mass are more likely to come up, since the die tends to settle heavy-side down, but perhaps the calculation is more interesting than that.)

In the case of Elbonia vs Ruritania, the other guy has some information that you don't, perhaps for instance the transcript of an intercepted conversation between the Elbonian ruler and a nearby power assuring the former of their support against any unwarranted Ruritanian aggression: they think the war is more plausible given this information. Further, if you agreed with that person in all other respects, i.e. if his derivation of the probability for war given all other relevant information was also 0.3 absent the interception, and you agreed on how verbal information translated into numbers, then you would have no choice but to also accept the final figure of 0.4 conditioning on the interception. Bayesian probability is presented as an exact system of inference (and Jaynes is pretty convincing on this point, I should add).
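Returning to the die for a moment, both halves of that computation are mechanical. A sketch, with an invented loading scheme standing in for whatever the X-ray would actually reveal:

```python
from fractions import Fraction

# Fair die: every face has probability 1/6.
fair = {face: Fraction(1, 6) for face in range(1, 7)}

# Hypothetical loaded die: face 4 is twice as likely as each other face.
loaded = {face: Fraction(2 if face == 4 else 1, 7) for face in range(1, 7)}

def p_three_evens(die):
    p_even = sum(p for face, p in die.items() if face % 2 == 0)
    return p_even ** 3  # independent throws: probabilities multiply

print(p_three_evens(fair))    # 1/8
print(p_three_evens(loaded))  # 64/343 -- the "more complicated" case
```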
8Vladimir_M14y
I agree about Jaynes and the exactness of Bayesian inference. (I haven't read his Probability Theory fully, but I should definitely get to it sometime. I did get through the opening chapters, however, and it's indeed mighty convincing.) Yet, I honestly don't see how either Jaynes or your comments answer my question in full, though I see no significant disagreement with what you've written.

Let me try rephrasing my question once more. In natural sciences, when you characterize some quantity with a number, this number must make sense in some empirical way, testable in an experiment, or at least with a thought experiment if a real one isn't feasible in practice. Suppose that you've determined somehow that the temperature of a bowl of water is 300K, and someone asks you what exactly this number means in practice -- why 300, and not 290, or 310, or 299, or 301? You can reply by describing (or even performing) various procedures with that bowl of water that will give predictably different outcomes depending on its exact temperature -- and the results of some such procedures with this particular bowl are consistent only with a temperature of 300K plus/minus some small value that can be made extremely tiny with a careful setup, and not any other numbers. Note that the question posed here is not how we've determined what the temperature of the water is in the first place. Instead, the question is: once we've made the claim that the temperature is some particular number, what practical observation can we make that will show that this particular number is consistent with reality, and others aren't? If a number can't be justified that way, then it is not something science can work with, and there is no reason to consider one value as "correct" and another "incorrect."

So now, when I ask the same question about probability, I'm not asking about the procedures we use to derive these numbers. I'm asking: once we've made the claim that the probability of some event is p, what p
0Morendil14y
That question, interesting as it is, is above my pay grade; I'm happy enough when I get the equations to line up the right way. I'll let others tackle it if so inclined.
3[anonymous]14y
Morendil's explanation is, as far as I can tell, correct. What's much more interesting is that framing examples in terms of frequencies is required to engage our normal intuitions about probability. There's at least some research indicating that when questions of estimation and probability are given in terms of frequencies (i.e. asking "how many problems do you think you got correct?" instead of "what is your confidence for this answer?"), many biases disappear completely.
0Antisuji14y
An engaging video, thanks. The study sounded familiar, so I looked for it... turns out I'd seen the guy's TED talk a while back: http://www.ted.com/talks/dan_pink_on_motivation.html

First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil

As a follow-up, I am curious whether anyone has attempted to mathematically model Ray's biggest and most disputed claim, which is the acceleration rate of technology. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim were based more on a model rather than basically a regressi...

1JoshuaZ14y
Note that Kurzweil's responded to the data dredging complaint by taking major lists compiled by other people, combining them, and showing that they fit a roughly exponential graph. (I don't have a citation for this, unfortunately.) Edit: I'm not aware of anyone making a model of the sort you envision, but it seems to suffer from the same problem that Kurzweil has in general, which is a potential overemphasis on information processing ability.
0xamdam14y
Why is basing this argument on information processing bad?
2JoshuaZ14y
Information processing isn't the whole story of what we care about. For example, the amount of energy available to societies and the per capita energy availability both matter. (In fairness, Kurzweil has discussed both of these, albeit not as extensively as information issues.) Another obvious metric to look at is average lifespan. This is one where one doesn't get an exponential curve. Now, if you assert that most humans will live to at least 50, and so look at lifespan minus 50 in major countries over the last hundred years, then the data starts to look slightly more promising, but Kurzweil's never discussed this as far as I'm aware, because he hasn't discussed lifespan issues much at all, except in the most obvious fashion. You can modify the data in other ways also. One of my preferred metrics looks at the average lifespan of people who survive past age 3 (this helps deal with the fact that we've done a lot more to handle infant mortality than we have to actually extend lifespan on the upper end). And when you do this, most gains of lifespan go away.
1xamdam14y
Good points. Still I feel that basing the crux of the argument on information processing is valid, unless the other concerns you mention interfere with it at some point. Is that what you're saying? Good observation about infant mortality; there should be an opposite metric of "% of centenarians", which would be a better measure in this context.
2JoshuaZ14y
%Centenarians might not be a good metric given that one will get an increasing fraction of those as birth rates decline. For the US, going by the data here and here, I get a total of 1.4 × 10^-4 for the fraction of the US population that was over 100 in 1990, and 1.7 × 10^-4 in 2000. But I'm not sure how accurate this data is. For example, in the first of the two links they throw out the 1970 census data as giving a clearly too-high number. One needs a lot more data points to see if this curve looks exponential (obviously two isn't enough), but the linked paper claims that for the foreseeable future the fraction of the population that will be over 100 will increase by 2/3rds each decade. If that is accurate, then that means we are seeing an exponential increase. Another metric to use might be the age of the oldest person by year of birth worldwide. That data shows a clear increasing trend, but the trend is very weak. Also, one would expect such an increase simply from increasing the general population (Edit: and better record keeping, since the list includes only those with good verification), so without a fair bit of statistical crunching, it isn't clear that this data shows anything.
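For a sense of scale, here is the observed decade-over-decade growth next to the projected rate (the 2/3-per-decade figure is the linked paper's claim, not mine):

```python
# Fraction of the US population over 100, from the census figures above.
f_1990, f_2000 = 1.4e-4, 1.7e-4
print(f"observed growth: {f_2000 / f_1990:.2f}x per decade")  # ~1.21x

# The paper's projection: +2/3 per decade, i.e. a constant 5/3 ratio.
fraction = f_2000
for year in range(2010, 2060, 10):
    fraction *= 5 / 3
    print(f"{year}: {fraction:.2e}")
# Two observed points fit an exponential and a straight line equally
# well, which is the point about needing a lot more data.
```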
1JoshuaZ14y
Well, they do interfere. For example, lifespan issues help tell us whether we're actually taking advantage of the exponential growth in information processing, or for that matter whether, even if we are taking advantage, it actually matters. If for example information processing ability increases exponentially but the marginal difficulty in improving other things (like say lifespan) increases at a faster rate, then even with an upsurge in information processing one isn't necessarily going to see much in the way of direct improvements. Information processing is also clearly limited in use based on energy availability. If I went back to say 1950 and gave someone access to a set of black boxes that mimic modern computers, the overall rate of increase in tech wouldn't be that high, because the information processing ability, while sometimes the rate limiting step, often is not (for example, generation of new ideas and the speed at which prototypes can be constructed and tested both matter). And this is even more apparent if I go further back in time. The timespan from 1900 to 1920 wouldn't look very different with those boxes added, to a large extent because people wouldn't know how to take advantage of their ability. So there are a lot of constraints other than just information processing and transmission capability. Edit: Information processing might potentially work as one measure among a handful, but by itself it is very crude.

I couldn't post an article due to lack of karma, so I had to post here. :P

I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see if there is anyone here who is actually against MWI, and if so, why?

After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.

Are there any more?

1JamesPfeiffer14y
Welcome! Here is a comment by Mitchell Porter. http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/1csi
1torekp14y
Seconding Mitchell Porter's friendly attitude toward the Transactional Interpretation, I recommend this paper by Ruth Kastner and John Cramer.

Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?

There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.

8SoullessAutomaton14y
I'm not sure I follow. Could you give a couple more examples of when to use this heuristic?
6[anonymous]14y
The only guideline I'm familiar with is "Tell me three times - tell me what you're going to explain, then explain it, then tell me what you just explained." This seems to work on multiple scales - from complete books to shorter essays (though I'm not sure if it works on the level of individual paragraphs).
0dclayh14y
I believe that's called the Bellman's Rule.
4[anonymous]14y
It really depends upon the topic and upon how much inferential distance there is between your ideas and the reader's understanding of the topic. Eliezer's earlier posts are easily understandable to someone with no prior experience in statistics, cognitive science, etc. because he uses a number of examples and metaphors to clearly illustrate his point. In fact, it might be helpful to use his posts as a metric to help answer your question. In general, though, it's probably best to repeat yourself by summarizing your point at both the beginning and end of your essay/post/whatever, and by using several examples to illustrate whatever you are talking about, especially if writing for non-experts.

I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.

The hardware can be reprogrammed by two actors - the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).

I haven't yet seen a coherent discussion of this kind of model - maybe it exists and I'm missing it. Is there already a coherent discussion of this point of view on this site, or somewhere else?

1Jordan14y
I look at conscious thought like a person trying to simultaneously ride multiple animals. Each animal can manage itself; if left to its own devices it'll keep on walking in some direction, perhaps even a good one. The rider can devote different levels of attention to any given animal, but his level of control bottoms out at some point: he can't control the muscles of the animals, only the trajectory (and not always this). One animal might be vision: it'll go on recognizing and paying attention to things unspurred, but the rider can rein the animal in and make it focus on one particular object, or even one point on that object. The animals all interact with each other, and sometimes it's impossible to control one after it's been incited by another. And of course, the rider only has so much attention to devote to the numerous beasts, and often can only wrangle one or two at a time. Some riders even have reins on themselves.
0pjeby14y
It's a little old, but there's always The Multiple Self.
0NancyLebovitz14y
I think that's a part of PJEby's theories.
0[anonymous]14y

In the same vein as Roko's investigation of LessWrong's neurotypicalness, I'd be interested to know the spread of Myers-Briggs personality types that we have here. I'd guess that we have a much higher proportion of INTPs than the general population.

An online Myers-Briggs test can be found here, though I'm not sure how accurate it is.

2[anonymous]14y
del
0AdeleneDawner14y
http://lesswrong.com/lw/2a5/on_enjoying_disagreeable_company/22ga That's a small sample, but we actually seem to score below average on Conscientiousness. Of the 7 responses to that request, the Conscientiousness scores were 1, 1, 8, 13, 41, 41, and 58.
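Assuming those seven numbers are percentile scores (which is how such Big Five tests usually report), the "below average" reading is easy to check:

```python
from statistics import mean, median

scores = [1, 1, 8, 13, 41, 41, 58]  # Conscientiousness percentiles
print(f"mean {mean(scores):.1f}, median {median(scores)}")
# mean ~23.3, median 13: well below the 50th percentile
```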
1[anonymous]14y
Add another C5. Doesn't surprise me, given all the akrasia talk around here.
0mattnewport14y
I tend to score very high on openness to experience, average to low on extraversion, and only average to low on conscientiousness.
1JoshuaZ14y
There are a lot of problems with Myers-Briggs. For example, the test doesn't account for people endorsing statements because they describe socially desirable traits. Claims that Myers-Briggs is accurate often seem to be connected to the Forer effect. A paper which discusses these issues is Boyle's "Myers-Briggs Type Indicator (MBTI): Some psychometric limitations", 1995, Australian Psychologist 30, 71-74.

Anyone here live in California? Specifically, San Diego county?

The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!