The Buddhists believe that one of the three keys to attaining true happiness is dissolving the illusion of the self. (The other two are dissolving the illusion of permanence, and ceasing the desire that leads to suffering.) I'm not really sure exactly what it means to say "the self is an illusion", and I'm not exactly sure how that will lead to enlightenment, but I do think one can easily take the first step on this long journey to happiness by beginning to dissolve the sense of one's identity.
Previously, in "Keep Your Identity Small", Paul Graham showed how a strong sense of identity can lead to epistemic irrationality, when someone refuses to accept evidence against x because "someone who believes x" is part of his or her identity. And in "The Curse of Identity", Kaj Sotala illustrated a human tendency to reinterpret a goal of "do x" as "give the impression of being someone who does x". These are both fantastic posts, and you should read them if you haven't already.
Here are three more ways in which identity can be a curse.
1. Don't be afraid to change
James March, professor of political science at Stanford University, says that when people make choices, they tend to use one of two basic models of decision making: the consequences model, or the identity model. In the consequences model, we weigh the costs and benefits of our options and make the choice that maximizes our satisfaction. In the identity model, we ask ourselves "What would a person like me do in this situation?"1
The author of the book I read this in didn't seem to take the obvious next step and acknowledge that the consequences model is clearly The Correct Way to Make Decisions: basically by definition, if you're using the identity model and it's giving you a different result than the consequences model would, you're being led astray. A heuristic I like to use is to limit my identity to the "observer" part of my brain, and make my only goal maximizing the amount of happiness and pleasure the observer experiences, and minimizing the amount of misfortune and pain. It sounds obvious when you lay it out in these terms, but let me give an example.
Alice is an incoming freshman in college trying to choose her major. At Hypothetical University, there are only two majors: English and business. Alice absolutely adores literature, and thinks business is dreadfully boring. Becoming an English major would allow her to have a career working with something she's passionate about, which is worth 2 megautilons to her, but it would also make her poor (0 mu). Becoming a business major would mean working in a field she is not passionate about (0 mu), but it would also make her rich, which is worth 1 megautilon. So English, with 2 mu, wins out over business, with 1 mu.
However, Alice is very bright, and is the type of person who can adapt herself to many situations and learn skills quickly. If Alice were to spend the first six months of college deeply immersing herself in studying business, she would probably start developing a passion for business. If she purposefully exposed herself to certain pro-business memeplexes (e.g. watched a movie glamorizing the life of Wall Street bankers), then she could speed up this process even further. After a few years of taking business classes, she would probably begin to forget what about English literature was so appealing to her, and be extremely grateful that she made the decision she did. Therefore she would gain the same 2 mu from having a job she is passionate about, along with an additional 1 mu from being rich, meaning that the 3 mu choice of business wins out over the 2 mu choice of English.
However, the possibility of self-modifying to become someone who finds English literature boring and business interesting is very disturbing to Alice. She sees it as a betrayal of everything that she is, even though she's actually only been interested in English literature for a few years. Perhaps she thinks of choosing business as "selling out" or "giving in". Therefore she decides to major in English, and takes the 2 mu choice instead of the superior 3 mu one.
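To make the arithmetic explicit, here is a toy sketch in Python (my own illustration; the megautilon figures are just the made-up numbers from the example above):

```python
# Alice's options under the consequences model, scored in megautilons (mu).
options = {
    "English major":                  2 + 0,  # passion (2 mu) + wealth (0 mu)
    "business major (as she is)":     0 + 1,  # no passion (0 mu) + wealth (1 mu)
    "business major (self-modified)": 2 + 1,  # cultivated passion (2 mu) + wealth (1 mu)
}

best = max(options, key=options.get)
print(best, "->", options[best], "mu")  # -> business major (self-modified) -> 3 mu
```

The identity model never even scores the third option, because "a person like Alice" doesn't self-modify.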
(Obviously this is a hypothetical example/oversimplification and there are a lot of reasons why it might be rational to pursue a career path that doesn't make very much money.)
It seems to me like human beings have a bizarre tendency to want to keep certain attributes and character traits stagnant, even when doing so provides no advantage, or is actively harmful. In a world where business-passionate people systematically do better than English-passionate people, it makes sense to self-modify to become business-passionate. Yet this is often distasteful.
For example, until a few weeks ago when I started solidifying this thinking pattern, I had an extremely adverse reaction to the idea of ceasing to be a hip-hop fan and becoming a fan of more "sophisticated" musical genres like jazz and classical, eventually coming to look down on the music I currently listen to as primitive or silly. This doesn't really make sense - I'm sure if I were to become a jazz and classical fan I would enjoy those genres at least as much as I currently enjoy hip hop. And yet I had a very strong preference to remain the same, even in the trivial realm of music taste.
Probably the most extreme example is the common tendency for depressed people to not actually want to get better, because depression has become such a core part of their identity that the idea of becoming a healthy, happy person is disturbing to them. (I used to struggle with this myself, in fact.) Being depressed is probably the most obviously harmful characteristic that someone can have, and yet many people resist self-modification.
Of course, the obvious objection is there's no way to rationally object to people's preferences - if someone truly prioritizes keeping their identity stagnant over not being depressed then there's no way to tell them they're wrong, just like if someone prioritizes paperclips over happiness there's no way to tell them they're wrong. But if you're like me, and you are interested in being happy, then I recommend looking out for this cognitive bias.
The other objection is that this philosophy leads to extremely unsavory wireheading-esque scenarios if you take it to its logical conclusion. But holding the opposite belief - that it's always more important to keep your characteristics stagnant than to be happy - clearly leads to even more absurd conclusions. So there is probably some point on the spectrum where change is so distasteful that it's not worth a boost in happiness (e.g. a lobotomy or something similar). However, I think that in actual practical pre-Singularity life, most people set this point far, far too low.
2. The hidden meaning of "be yourself"
(This section is entirely my own speculation, so take it as you will.)
"Be yourself" is probably the most widely-repeated piece of social skills advice despite being pretty clearly useless - if it worked then no one would be socially awkward, because everyone has heard this advice.
However, there must be some sort of core grain of truth in this statement, or else it wouldn't be so widely repeated. I think that core grain is basically the point I just made, applied to social interaction. I.e., always optimize for social success and positive relationships (particularly in the moment), and not for signalling a certain identity.
The ostensible purpose of identity/signalling is to appear to be a certain type of person, so that people will like and respect you, which is in turn so that people will want to be around you and be more likely to do stuff for you. However, oftentimes this goes horribly wrong, and people become very devoted to cultivating certain identities that are actively harmful for this purpose, e.g. goth, juggalo, "cool reserved aloof loner", guy that won't shut up about politics, etc. A more subtle example is Fred, who holds the wall and refuses to dance at a nightclub because he is a serious, dignified sort of guy, and doesn't want to look silly. However, the reason why "looking silly" is generally a bad thing is that it makes people lose respect for you, and therefore makes them less likely to associate with you. In the situation Fred is in, holding the wall and looking serious will cause no one to associate with him, but if he dances and mingles with strangers and looks silly, people will be likely to associate with him. So unless he's afraid of looking silly in the eyes of God, this seems to be irrational.
Probably more common is the tendency to take great care to cultivate identities that are neither harmful nor beneficial. E.g. "deep philosophical thinker", "Grateful Dead fan", "tough guy", "nature lover", "rationalist", etc. Boring Bob is a guy who wears a blue polo shirt and khakis every day, works as hard as expected but no harder in his job as an accountant, holds no political views, and when he goes home he relaxes by watching whatever's on TV and reading the paper. Boring Bob would probably improve his chances of social success by cultivating a more interesting identity, perhaps by changing his wardrobe, hobbies, and viewpoints, and then liberally signalling this new identity. However, most of us are not Boring Bob, and a much better social success strategy for most of us is probably to smile more, improve our posture and body language, be more open and accepting of other people, learn how to make better small talk, etc. But most people fail to realize this and instead play elaborate signalling games in order to improve their status, sometimes even at the expense of lots of time and money.
Some ways in which people fail to "be themselves" in individual social interactions: liberally sprinkling references to certain attributes that they want to emphasize, saying nonsensical and surreal things in order to seem quirky, being afraid to give obvious responses to questions in order to seem more interesting, inserting forced "cool" actions into their mannerisms, acting underwhelmed by what the other person is saying in order to seem jaded and superior, etc. Whereas someone who is "being herself" is more interested in creating rapport with the other person than giving off a certain impression of herself.
Additionally, optimizing for a particular identity might not only be counterproductive - it might actually be a quick way to get people to despise you.
I used to not understand why certain "types" of people, such as "hipsters"2 or Ed Hardy and Affliction-wearing "douchebags", are so universally loathed (especially on the internet). Yes, these people are adopting certain styles in order to be cool and interesting, but isn't everyone doing the same? No one looks through their wardrobe and says "hmm, I'll wear this sweater because it makes me uncool, and it'll make people not like me". Perhaps hipsters and Ed Hardy Guys fail in their mission to be cool, but should we really hate them for this? If being a hipster was cool two years ago, and being someone who wears normal clothes, acts normal, and doesn't do anything "ironically" is cool today, then we're really just hating people for failing to keep up with the trends. And if being a hipster actually is cool, then, well, who can fault them for choosing to be one?
That was my old thought process. Now it is clear to me that what makes hipsters and Ed Hardy Guys hated is that they aren't "being themselves" - they are much more interested in cultivating an identity of interestingness and masculinity, respectively, than connecting with other people. The same thing goes for pretty much every other collectively hated stereotype I can think of3 - people who loudly express political opinions, stoners who won't stop talking about smoking weed, attention-seeking teenage girls on Facebook, extremely flamboyantly gay guys, "weeaboos", hippies and new age types, 2005 "emo kids", overly politically correct people, tumblr SJA weirdos who identify as otherkin and whatnot, overly patriotic "rednecks", the list goes on and on.
This also clears up a confusion that occurred to me when reading How to Win Friends and Influence People. I know people who have a Dale Carnegie mindset of being optimistic and nice to everyone they meet and are adored for it, but I also know people who have the same attitude and yet are considered irritatingly saccharine and would probably do better to "keep it real" a little. So what's the difference? I think the difference is that the former group are genuinely interested in being nice to people and building rapport, while members of the second group have made an error like the one described in Kaj Sotala's post and are merely trying to give off the impression of being a nice and friendly person. The distinction is obviously very subtle, but it's one that humans are apparently very good at perceiving.
I'm not exactly sure what it is that causes humans to have this tendency of hating people who are clearly optimizing for identity - it's not as if they harm anyone. It probably has to do with tribal status. But what is clear is that you should definitely not be one of them.
3. The worst mistake you can possibly make in combating akrasia
The main thesis of PJ Eby's Thinking Things Done is that the primary reason why people are incapable of being productive is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if I do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal. A lot of depressed people make statements like "I'm worthless", or "I'm scum" or "No one could ever love me", which are illogically dramatic and overly black and white, until you realize that these statements are merely interpretations of a feeling of "I'm about to get kicked out of the tribe, and therefore die." Animals have a freezing response to imminent death, so if you are fearing failure you will go into do-nothing mode and not be able to work at all.4
In Succeed: How We Can Reach Our Goals, PhD psychologist Heidi Halvorson takes a different view and describes positive motivation and negative motivation as having pros and cons. However, she has her own dichotomy of Good Motivation and Bad Motivation: "Be good" goals are performance goals, and are directed at achieving a particular outcome, like getting an A on a test, reaching a sales target, getting your attractive neighbor to go out with you, or getting into law school. They are very often tied closely to a sense of self-worth. "Get better" goals are mastery goals, and people who pick these goals judge themselves instead in terms of the progress they are making, asking questions like "Am I improving? Am I learning? Am I moving forward at a good pace?" Halvorson argues that "get better" goals are almost always drastically better than "be good" goals5. An example quote (from page 60) is:
When my goal is to get an A in a class and prove that I'm smart, and I take the first exam and I don't get an A... well, then I really can't help but think that maybe I'm not so smart, right? Concluding "maybe I'm not smart" has several consequences and none of them are good. First, I'm going to feel terrible - probably anxious and depressed, possibly embarrassed or ashamed. My sense of self-worth and self-esteem are going to suffer. My confidence will be shaken, if not completely shattered. And if I'm not smart enough, there's really no point in continuing to try to do well, so I'll probably just give up and not bother working so hard on the remaining exams.
And finally, in Feeling Good: The New Mood Therapy, David Burns describes a destructive side effect of depression he calls "do-nothingism":
One of the most destructive aspects of depression is the way it paralyzes your willpower. In its mildest form you may simply procrastinate about doing a few odious chores. As your lack of motivation increases, virtually any activity appears so difficult that you become overwhelmed by the urge to do nothing. Because you accomplish very little, you feel worse and worse. Not only do you cut yourself off from your normal sources of stimulation and pleasure, but your lack of productivity aggravates your self-hatred, resulting in further isolation and incapacitation.
Synthesizing these three pieces of information leads me to believe that the worst thing you can possibly do for your akrasia is to tie your success and productivity to your sense of identity/self-worth, especially if you're using negative motivation to do so, and especially if you suffer or have recently suffered from depression or low self-esteem. The thought of having a negative self-image is scary and unpleasant, perhaps for the evo-psych reasons PJ Eby outlines. If you tie your productivity to your fear of a negative self-image, working will become scary and unpleasant as well, and you won't want to do it.
I feel like this might be the single biggest reason why people are akratic. It might be a little premature to say that, and I might be biased by how large a factor this mistake was in my own akrasia. But unfortunately, this trap seems like a very easy one to fall into. If you're someone who is lazy and isn't accomplishing much in life, perhaps depressed, then it makes intuitive sense to motivate yourself by saying "Come on, self! Do you want to be a useless failure in life? No? Well get going then!" But doing so will accomplish the exact opposite and make you feel miserable.
So there you have it. In addition to making you a bad rationalist and causing you to lose sight of your goals, a strong sense of identity will cause you to make poor decisions that lead to unhappiness, be unpopular, and be unsuccessful. I think the Buddhists were onto something with this one, personally, and I try to limit my sense of identity as much as possible. A trick you can use, in addition to the "be the observer" trick I mentioned, is whenever you find yourself thinking in identity terms, to swap out that identity for the identity of "person who takes over the world by transcending the need for a sense of identity".
This is my first LessWrong discussion post, so constructive criticism is greatly appreciated. Was this informative? Or was what I said obvious, and I'm retreading old ground? Was this well written? Should this have been posted to Main? Should this not have been posted at all? Thank you.
1. Paraphrased from page 153 of Switch: How to Change When Change is Hard
2. Actually, while it works for this example, I think the stereotypical "hipster" is a bizarre caricature that doesn't match anyone who actually exists in real life, and the degree to which people will rabidly espouse hatred for this stereotypical figure (or used to two or three years ago) is one of the most bizarre tendencies people have.
3. Other than groups that arguably hurt people (religious fundamentalists, PUAs), the only exception I can think of is frat boy/jock types. They talk about drinking and partying a lot, sure, but not really any more than people who drink and party a lot would be expected to. Possibilities for their hated status include that they do in fact engage in obnoxious signalling and I'm not aware of it, jealousy, or stigmatization as hazers and date rapists. Also, a lot of people hate stereotypical "ghetto" black people who sag their jeans and notoriously type in a broken, difficult-to-read form of English. This could either be a weak example of the trend (I'm not really sure what it is they would be signalling, maybe dangerous-ness?), or just a manifestation of racism.
4. I'm not sure if this is valid science that he pulled from some other source, or if he just made this up.
There's a lot of background mess in our mental pictures of the world. We try to be accurate on important issues, but a whole lot of the less important stuff we pick up from the media, the movies, and random impressions. And once these impressions are in our mental pictures, they just don't go away - until we find a fact that causes us to say "huh", and reassess.
Here are three facts that have caused that "huh" in me, recently, and completely rearranged minor parts of my mental map. I'm sharing them here, because that experience is a valuable one.
- Think terrorist attack on Israel - did the phrase "suicide bombing" spring to mind? If so, you're so out of fashion: the last suicide bombing in Israel was in 2008 - a year where dedicated suicide bombers managed the feat of killing a grand total of 1 victim. Suicide bombings haven't happened in Israel for over half a decade.
- Large scale plane crashes seem to happen all the time, all over the world. They must happen at least a few times a year, in every major country, right? Well, if I'm reading this page right, the last time there was an airline crash in the USA that killed more than 50 people was... in 2001 (2 months after 9/11). Nothing on that scale since then. And though there have been crashes en route to/from Spain and France since then, it seems that major air crashes in western countries are something that essentially never happens.
- The major cost of a rocket isn't the fuel, as I'd always thought. It seems that the Falcon 9 rocket costs $54 million per launch, of which fuel is only $0.2 million (or, as I prefer to think of it - I could sell my house to get enough fuel to fly to space). In the difference between those two prices, lies the potential for private spaceflight to low-Earth orbit.
Note: I have no intention of criticizing the person involved. I admire that (s)he made the "right" decision in the end (in my opinion), and I mention it only as an example we could all learn from. I did request permission to use his/her anecdote here. I'll also use the pronoun "he" when really I mean he/she.
Once Pat says “no,” it’s harder to get to “yes” than if you had never asked.
Crocker's Rules has this very clear clause, and we should keep it well in mind:
Note that Crocker's Rules does not mean you can insult people; it means that other people don't have to worry about whether they are insulting you. Crocker's Rules are a discipline, not a privilege. Furthermore, taking advantage of Crocker's Rules does not imply reciprocity. How could it? Crocker's Rules are something you do for yourself, to maximize information received - not something you grit your teeth over and do as a favor.
Recently, a rationalist heard over social media that an acquaintance - a friend-of-a-friend - had found their lost pet. The acquaintance said it was better than winning a lottery. The rationalist responded that unless they'd spent thousands of dollars searching, or posted a large reward, they were saying something they didn't really mean. Then, feeling like a party-pooper and a downer, he deleted his comment.
I believe this was absolutely the correct thing to do. As Miss Manners says (http://www.washingtonpost.com/wp-dyn/content/article/2007/02/06/AR2007020601518.html), people will associate unpleasant emotions with the source and the cause. They're not going to say, oh, that's correct; I was mistaken about the value of my pet; thank you for correcting my flawed value system.
Instead they'll say, those rationalists are so heartless, attaching dollar signs to everything. They think they know better. They're rude and stuck up. I don't want to have anything to do with them. And then they'll walk away with a bad impression of us. (Yes, all of us, for we are a minority now, and each of us reflects upon all of us, the same way a Muslim bomber would reflect poorly on public opinion of all Muslims, while a Christian bomber would not.) In the future they'll be less likely to listen to any one of us.
The only appropriate thing to say in this case is "I'm so happy for you." But that doesn't mean we can't promote ourselves ever. Here are some alternatives.
- At another time, ask for "help" with your own decisions. Go through the process of calculating out all the value and expected values. This is completely non-confrontational, and your friends/acquaintances will not need to defend anything. Whenever they give a suggestion, praise it as being a good idea, and then make a show of weighing the expected value out loud.
- Say "wow, I don't know many people who'd spend that much! Your pet is lucky to have someone like you!" But it must be done without any sarcasm. They might feel a bit uncomfortable taking that much praise. They might go home and mull it over.
- Invite them to "try something you saw online" with you. This thing could be mindcharting, the estimation game, learning quantum physics, meditation, goal refactoring, anything. Emphasize the curiosity/exploring aspect. See if it leads into a conversation about rationality. Don't mention the incident with the pet - it could come off as criticism.
- At a later date, introduce them to Methods of Rationality. Say it's because "it's funny," or "you have a lot of interesting ideas," or even just "I think you'll like it." That's generally a good starting point. :)
- Let it be. First do no harm.
I was told long ago (in regards to LGBT rights) that minds are not changed by logic or reasoning or facts. They are changed over a long period of time by emotions. For us, that means showing what we believe without pressing it on others, while at the same time being the kind of person you want to be like. If we are successful and happy, if we carry ourselves with kindness and dignity, we'll win over hearts.
Konkvistador's excellent List of Blogs by LWers led me to some of my favorite blogs, but is pretty well hidden and gradually becoming obsolete. In order to create an easily updatable replacement, I have created the wiki page List of Blogs and added most of the blogs from Konkvistador's list. If you have a blog, or you read blogs, please help in the following ways:
-- Add your blog if it's not on there, and if it has updated in the past few months (no dead blogs this time, exceptions for very complete archives of excellent material like Common Sense Atheism in the last section)
-- Add any other blogs you like that are written by LWers or frequently engage with LW ideas
-- Remove your blog if you don't want it on there (I added some prominent critics of LW ideas who might not want to be linked to us)
-- Move your blog to a different category if you don't like the one it's in right now
-- Add a description of your blog, or change the one that already exists
-- Change the name you're listed by (I defaulted to people's LW handles)
-- Bold the name of your blog if it updates near-daily, has a large readership/commentership, and/or gets linked to on LW a lot
-- Improve formatting
Somebody more familiar with the Less Wrong twittersphere might want to do something similar to Grognor's Less Wrong on Twitter.
LessWrong has twice discussed the PhilPapers Survey of professional philosophers' views on thirty controversies in their fields — in early 2011 and, more intensively, in late 2012. We've also been having some lively debates, prompted by LukeProg, about the general value of contemporary philosophical assumptions and methods. It would be swell to test some of our intuitions about how philosophers go wrong (and right) by looking closely at the aggregate output and conduct of philosophers, but relevant data is hard to come by.
Fortunately, Davids Chalmers and Bourget have done a lot of the work for us. They released a paper summarizing the PhilPapers Survey results two days ago, identifying, by factor analysis, seven major components consolidating correlations between philosophical positions, influences, areas of expertise, etc.
1. Anti-Naturalists: Philosophers of this stripe tend (more strongly than most) to assert libertarian free will (correlation with factor .66), theism (.63), the metaphysical possibility of zombies (.47), and A-theories of time (.28), and to reject physicalism (.63), naturalism (.57), personal identity reductionism (.48), and liberal egalitarianism (.32).
Anti-Naturalists tend to work in philosophy of religion (.3) or Greek philosophy (.11). They avoid philosophy of mind (-.17) and cognitive science (-.18) like the plague. They hate Hume (-.14), Lewis (-.13), Quine (-.12), analytic philosophy (-.14), and being from Australasia (-.11). They love Plato (.13), Aristotle (.12), and Leibniz (.1).
2. Objectivists: They tend to accept 'objective' moral values (.72), aesthetic values (.66), abstract objects (.38), laws of nature (.28), and scientific posits (.28). Note that 'Objectivism' is being used here to pick out a tendency to treat value as objectively binding and metaphysical posits as objectively real; it isn't connected to Ayn Rand.
A disproportionate number of objectivists work in normative ethics (.12), Greek philosophy (.1), or philosophy of religion (.1). They don't work in philosophy of science (-.13) or biology (-.13), and aren't continentalists (-.12) or Europeans (-.14). Their favorite philosopher is Plato (.1), least favorites Hume (-.2) and Carnap (-.12).
3. Rationalists: They tend to self-identify as 'rationalists' (.57) and 'non-naturalists' (.33), to accept that some knowledge is a priori (.79), and to assert that some truths are analytic, i.e., 'true by definition' or 'true in virtue of meaning' (.72). Also tend to posit metaphysical laws of nature (.34) and abstracta (.28). 'Rationalist' here clearly isn't being used in the LW or freethought sense; philosophical rationalists as a whole in fact tend to be theists.
Rationalists are wont to work in metaphysics (.14), and to avoid thinking about the sciences of life (-.14) or cognition (-.1). They are extremely male (.15), inordinately British (.12), and prize Frege (.18) and Kant (.12). They absolutely despise Quine (-.28, the largest correlation for a philosopher), and aren't fond of Hume (-.12) or Mill (-.11) either.
4. Anti-Realists: They tend to define truth in terms of our cognitive and epistemic faculties (.65) and to reject scientific realism (.6), a mind-independent and knowable external world (.53), metaphysical laws of nature (.43), and the notion that proper names have no meaning beyond their referent (.35).
They are extremely female (.17) and young (.15 correlation coefficient for year of birth). They work in ethics (.16), social/political philosophy (.16), and 17th-19th century philosophy (.11), avoiding metaphysics (-.2) and the philosophies of mind (-.15) and language (-.14). Their heroes are Kant (.23), Rawls (.14), and, interestingly, Hume (.11). They avoid analytic philosophy even more than the anti-naturalists do (-.17), and aren't fond of Russell (-.11).
5. Externalists: Really, they just like everything that anyone calls 'externalism'. They think the content of our mental lives in general (.66) and perception in particular (.55), and the justification for our beliefs (.64), all depend significantly on the world outside our heads. They also think that you can fully understand a moral imperative without being at all motivated to obey it (.5).
6. Star Trek Haters: This group is less clearly defined than the above ones. The main thing uniting them is that they're thoroughly convinced that teleportation would mean death (.69). Beyond that, Trekophobes tend to be deontologists (.52) who don't switch on trolley dilemmas (.47) and like A-theories of time (.41).
Trekophobes are relatively old (-.1) and American (.13 affiliation). They are quite rare in Australia and Asia (-.18 affiliation). They're fairly evenly distributed across philosophical fields, and tend to avoid weirdo intuitions-violating naturalists — Lewis (-.13), Hume (-.12), analytic philosophers generally (-.11).
7. Logical Conventionalists: They two-box on Newcomb's Problem (.58), reject nonclassical logics (.48), and reject epistemic relativism and contextualism (.48). So they love causal decision theory, think all propositions/facts are generally well-behaved (always either true or false and never both or neither), and think there are always facts about which things you know, independent of who's evaluating you. Suspiciously normal.
They're also fond of a wide variety of relatively uncontroversial, middle-of-the-road views most philosophers agree about or treat as 'the default' — political egalitarianism (.33), abstract object realism (.3), and atheism (.27). They tend to think zombies are metaphysically possible (.26) and to reject personal identity reductionism (.26) — which aren't metaphysically innocent or uncontroversial positions, but, again, do seem to be remarkably straightforward and banal approaches to all these problems. Notice that a lot of these positions are intuitive and 'obvious' in isolation, but that they don't converge upon any coherent world-view or consistent methodology. They clearly aren't hard-nosed philosophical conservatives like the Anti-Naturalists, Objectivists, Rationalists, and Trekophobes, but they also clearly aren't upstart radicals like the Externalists (on the analytic side) or the Anti-Realists (on the continental side). They're just kind of, well... obvious.
Conventionalists are the only identified group that are strongly analytic in orientation (.19). They tend to work in epistemology (.16) or philosophy of language (.12), and are rarely found in 17th-19th century (-.12) or continental (-.11) philosophy. They're influenced by notorious two-boxer and modal realist David Lewis (.1), and show an aversion to Hegel (-.12), Aristotle (-.11), and Wittgenstein (-.1).
An observation: Different philosophers rely on — and fall victim to — substantially different groups of methods and intuitions. A few simple heuristics, like 'don't believe weird things until someone conclusively demonstrates them' and 'believe things that seem to be important metaphysical correlates for basic human institutions' and 'fall in love with any views starting with "ext"', explain a surprising amount of diversity. And there are clear common tendencies to either trust one's own rationality or to distrust it in partial (Externalism) or pathological (Anti-Realism, Anti-Naturalism) ways. But the heuristics don't hang together in a single Philosophical World-View or Way Of Doing Things, or even in two or three such world-views.
There is no large, coherent, consolidated group that's particularly attractive to LWers across the board, but philosophers seem to fall short of LW expectations for some quite distinct reasons. So attempting to criticize, persuade, shame, praise, or even speak of or address philosophers as a whole may be a bad idea. I'd expect it to be more productive to target specific 'load-bearing' doctrines on dimensions like the above than to treat the group as a monolith, for many of the same reasons we don't want to treat 'scientists' or 'mathematicians' as monoliths.
Another important result: Something is going seriously wrong with the high-level training and enculturation of professional philosophers. Or fields are just attracting thinkers who are disproportionately bad at critically assessing a number of the basic claims their field is predicated on or exists to assess.
Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of 'Other' answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism dimension. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it's objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist. This isn't always the case; but it's genuinely troubling to see non-expertise emerge as a predictor of getting any important question in an academic field right.
EDIT: I've replaced "cluster" talk above with "dimension" talk. I had in mind gjm's "clusters in philosophical idea-space", not distinct groups of philosophers. gjm makes this especially clear:
The claim about these positions being made by the authors of the paper is not, not even a little bit, "most philosophers fall into one of these seven categories". It is "you can generally tell most of what there is to know about a philosopher's opinions if you know how well they fit or don't fit each of these seven categories". Not "philosopher-space is mostly made up of these seven pieces" but "philosopher-space is approximately seven-dimensional".
I'm particularly guilty of promoting this misunderstanding (including in portions of my own brain) by not noting that the dimensions can be flipped to speak of (anti-anti-)naturalists, anti-rationalists, etc. My apologies. As Douglas_Knight notes below, "If there are clusters [of philosophers], PCA might find them, but PCA might tell you something interesting even if there are no clusters. But if there are clusters, the factors that PCA finds won't be the clusters, but the differences between them. [...] Actually, factor analysis pretty much assumes that there aren't clusters. If factor 1 put you in a cluster, that would tell pretty much all there is to say and would pin down your factor 2, but the idea in factor analysis is that your factor 2 is designed to be as free as possible, despite knowing factor 1."
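To make the "dimensions, not clusters" point concrete, here is a minimal sketch in Python (my own toy illustration with simulated data, not Bourget and Chalmers' actual pipeline) of how loadings like the correlations quoted above fall out of a factor analysis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical survey data: 500 respondents x 4 items, driven by a single
# latent "anti-naturalism" trait; physicalism loads negatively, as reported.
true_loadings = np.array([[0.7, 0.6, 0.5, -0.6]])
responses = rng.normal(size=(500, 1)) @ true_loadings \
            + 0.5 * rng.normal(size=(500, 4))

# Extract the first principal component as the recovered dimension.
factor = PCA(n_components=1).fit_transform(responses)

# Correlate each item with the recovered factor -- analogous to the
# .66/.63/.47/-.63 loadings quoted for the Anti-Naturalist dimension.
# (The overall sign of a principal component is arbitrary.)
for name, item in zip(["free will", "theism", "zombies", "physicalism"],
                      responses.T):
    print(f"{name:12s} {np.corrcoef(item, factor[:, 0])[0, 1]:+.2f}")
```

Note that the respondents here form no clusters at all (the latent trait is a smooth Gaussian), yet the analysis still recovers a clean dimension, which is exactly Douglas_Knight's point.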
If you haven't seen the video of a wet towel being wrung out in space yet, it provides a great opportunity to test some basic rationality skills.
Skill #1: Notice the opportunity. (I failed this test. I had a fuzzy, wrong idea about what would happen. I didn't notice the fuzziness of my own thinking until after I watched the video, when it was too late to apply basic rationality skills. I'll never know if I could have made a correct prediction.)
Skill #2: Enumerate possibilities.
Skill #3: Incorporate prior information.
Skill #4: Make clear predictions.
Skill #5: Understand why/how your prediction failed/succeeded.
Skill #6: There may be some things you predicted and some you didn't. Don't forget to notice the partial failures along with the partial successes.
You are on the International Space Station. You get a towel soaking wet, then you wring it out. What happens?
If you haven't seen it, don't scroll into the comments.
Don't click the link until you've thought about it!
Despite recent strides in my productivity habits, I still catch myself procrastinating at work more often than I'd like. It's not that I make a conscious decision to put off a project; it just feels as though I wake up 20 minutes later and realize that nothing got accomplished. (Or, to avoid the passive voice and take much-deserved responsibility, I "realize that I haven't accomplished anything".)
I've been looking for techniques to improve, and got a lot out of LukeProg's articles on How to Beat Procrastination and My Algorithm for Beating Procrastination, based on Piers Steel's The Procrastination Equation.
But I also wanted a way to put the principles to use with the lowest activation cost possible. I can't expect unmotivated future-me to be too cooperative; I need to provide him with an easy path to get in flow.
So! I developed a 10-Step Productivity Checklist, pulling the concepts from Luke's articles and adding a couple points that are important for me. Now whenever I notice myself being unproductive I have a much easier time following the steps one by one until I get back in a good mindset to work.
1. What is the task? Make sure you're going to focus on one thing at a time.
2. Do you have something to drink? Get yourself some tea, coffee, or water.
3. Are distractions closed? Shut the door, quit Tweetdeck, close the Facebook and Gmail tabs, and set Skype to "Do not disturb."
4. What music will you listen to in order to inspire yourself to be productive or get in flow? Put on a good instrumental playlist! (I love video game soundtracks; further notes in comments.)
5. Why are you doing this task? Trace the value until you feel the benefit.
6. What are the parts of this task? Break things down as much as you can, until they're physical actions if possible.
7. What are some ways to gamify the task? Try to have fun with it!
8. What are some rewards you can offer yourself for completing sections of the task? Smiling, throwing your arms up in the air and proclaiming victory, or M&M's all count.
9. What's an achievable goal for this sitting? Set a reasonable expectation for yourself.
10. How long will you work until you take a break? Set a timer and commit to focusing.

Get into flow!
I'd love to hear from you:
- Whether these are useful
- Any ideas for good ways to enact these steps
- Steps that should be added/removed/tweaked
- Whether there are other posts/resources that you've found valuable
I hope this helps you as much as it's helping me, and that together we can make it even better!
Shortened version of: Pascal's Muggle: Infinitesimal Priors and Strong Evidence
One proposal which has been floated for dealing with Pascal's Mugger is to penalize hypotheses that let you affect a large number of people, in proportion to the number of people affected - what we could call perhaps a "leverage penalty" instead of a "complexity penalty". This isn't just for Pascal's Mugger in particular; it seems required to have expected utilities in general converge when the 'size' of scenarios can grow much faster than their algorithmic complexity.
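To spell out why convergence fails without such a penalty (my own formalization of the claim above, not notation from the original post): a hypothesis H_N of the form "my action affects N people" costs only about K(H_N) = O(log N) bits to describe, so under a complexity-based prior,

```latex
\[
  \sum_N 2^{-K(H_N)} \cdot N = \infty
  \qquad \text{(complexity penalty alone: utilities grow faster than priors shrink),}
\]
\[
  \sum_N \frac{2^{-K(H_N)}}{N} \cdot N
  \;=\; \sum_N 2^{-K(H_N)} \;\le\; 1
  \qquad \text{(with a $1/N$ leverage penalty, the sum converges).}
\]
```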
Unfortunately this potentially leads us into a different problem, that of Pascal's Muggle.
Suppose a poorly-dressed street person asks you for five dollars in exchange for doing a googolplex's worth of good using his Matrix Lord powers - say, saving the lives of a googolplex other people inside computer simulations they're running.
"Well," you reply, "I think that it would be very improbable that I would be able to affect so many people through my own, personal actions - who am I to have such a great impact upon events? Indeed, I think the probability is somewhere around one over googolplex, maybe a bit less. So no, I won't pay five dollars - it is unthinkably improbable that I could do so much good!"
"I see," says the Mugger.
A wind begins to blow about the alley, whipping the Mugger's loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle. In the sky above, a gap edged by blue fire opens with a horrendous tearing sound - you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too - and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.
"That's not actually me," the Mugger says, "just a conceptual representation, but I don't want to drive you insane. Now give me those five dollars, and I'll save a googolplex lives, just as promised. It's easy enough for me, given the computing power my home universe offers. As for why I'm doing this, there's an ancient debate in philosophy among my people - something about how we ought to sum our expected utilities - and I mean to use the video of this event to make a point at the next decision theory conference I attend. Now will you give me the five dollars, or not?"
"Mm... no," you reply.
"No?" says the Mugger. "I understood earlier when you didn't want to give a random street person five dollars based on a wild story with no evidence behind it whatsoever. But surely I've offered you evidence now."
"Unfortunately, you haven't offered me enough evidence," you explain.
"Seriously?" says the Mugger. "I've opened up a fiery portal in the sky, and that's not enough to persuade you? What do I have to do, then? Rearrange the planets in your solar system, and wait for the observatories to confirm the fact? I suppose I could also explain the true laws of physics in the higher universe in more detail, and let you play around a bit with the computer program that encodes all the universes containing the googolplex people I would save if you just gave me the damn five dollars -"
"Sorry," you say, shaking your head firmly, "there's just no way you can convince me that I'm in a position to affect a googolplex people, because the prior probability of that is one over googolplex. If you wanted to convince me of some fact of merely 2-100 prior probability, a mere decillion to one - like that a coin would come up heads and tails in some particular pattern of a hundred coinflips - then you could just show me 100 bits of evidence, which is within easy reach of my brain's sensory bandwidth. I mean, you could just flip the coin a hundred times, and my eyes, which send my brain a hundred megabits a second or so - though that gets processed down to one megabit or so by the time it goes through the lateral geniculate nucleus - would easily give me enough data to conclude that this decillion-to-one possibility was true. But to conclude something whose prior probability is on the order of one over googolplex, I need on the order of a googol bits of evidence, and you can't present me with a sensory experience containing a googol bits. Indeed, you can't ever present a mortal like me with evidence that has a likelihood ratio of a googolplex to one - evidence I'm a googolplex times more likely to encounter if the hypothesis is true, than if it's false - because the chance of all my neurons spontaneously rearranging themselves to fake the same evidence would always be higher than one over googolplex. You know the old saying about how once you assign something probability one, or probability zero, you can't update that probability regardless of what evidence you see? Well, odds of a googolplex to one, or one to a googolplex, work pretty much the same way."
"So no matter what evidence I show you," the Mugger says - as the blue fire goes on crackling in the torn sky above, and screams and desperate prayers continue from the street beyond - "you can't ever notice that you're in a position to help a googolplex people."
"Right!" you say. "I can believe that you're a Matrix Lord. I mean, I'm not a total Muggle, I'm psychologically capable of responding in some fashion to that giant hole in the sky. But it's just completely forbidden for me to assign any significant probability whatsoever that you will actually save a googolplex people after I give you five dollars. You're lying, and I am absolutely, absolutely, absolutely confident of that."
"So you weren't just invoking the leverage penalty as a plausible-sounding way of getting out of paying me the five dollars earlier," the Mugger says thoughtfully. "I mean, I'd understand if that was just a rationalization of your discomfort at forking over five dollars for what seemed like a tiny probability, when I hadn't done my duty to present you with a corresponding amount of evidence before demanding payment. But you... you're acting like an AI would if it was actually programmed with a leverage penalty on hypotheses!"
"Exactly," you say. "I'm forbidden a priori to believe I can ever do that much good."
"Why?" the Mugger says curiously. "I mean, all I have to do is press this button here and a googolplex lives will be saved." The figure within the blazing portal above points to a green button on the console before it.
"Like I said," you explain again, "the prior probability is just too infinitesimal for the massive evidence you're showing me to overcome it -"
The Mugger shrugs, and vanishes in a puff of purple mist.
The portal in the sky above closes, taking with it the console and the green button.
(The screams go on from the street outside.)
A few days later, you're sitting in your office at the physics institute where you work, when one of your colleagues bursts in through your door, seeming highly excited. "I've got it!" she cries. "I've figured out that whole dark energy thing! Look, these simple equations retrodict it exactly, there's no way that could be a coincidence!"
At first you're also excited, but as you pore over the equations, your face configures itself into a frown. "No..." you say slowly. "These equations may look extremely simple so far as computational complexity goes - and they do exactly fit the petabytes of evidence our telescopes have gathered so far - but I'm afraid they're far too improbable to ever believe."
"What?" she says. "Why?"
"Well," you say reasonably, "if these equations are actually true, then our descendants will be able to exploit dark energy to do computations, and according to my back-of-the-envelope calculations here, we'd be able to create around a googolplex people that way. But that would mean that we, here on Earth, are in a position to affect a googolplex people - since, if we blow ourselves up via a nanotechnological war or (cough) make certain other errors, those googolplex people will never come into existence. The prior probability of us being in a position to impact a googolplex people is on the order of one over googolplex, so your equations must be wrong."
"Hmm..." she says. "I hadn't thought of that. But what if these equations are right, and yet somehow, everything I do is exactly balanced, down to the googolth decimal point or so, with respect to how it impacts the chance of modern-day Earth participating in a chain of events that leads to creating an intergalactic civilization?"
"How would that work?" you say. "There's only seven billion people on today's Earth - there's probably been only a hundred billion people who ever existed total, or will exist before we go through the intelligence explosion or whatever - so even before analyzing your exact position, it seems like your leverage on future affairs couldn't reasonably be less than one in a ten trillion part of the future or so."
"But then given this physical theory which seems obviously true, my acts might imply expected utility differentials on the order of 1010100-13," she explains, "and I'm not allowed to believe that no matter how much evidence you show me."
This problem may not be as bad as it looks; a leverage penalty may lead to more reasonable behavior than depicted above, after taking into account Bayesian updating:
Mugger: "Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers."
Mugger: "Why not? It's a really large impact."
You: "Yes, and I assign a probability on the order of 1 in 3↑↑↑3 that I would be in a unique position to affect 3↑↑↑3 people."
Mugger: "Oh, is that really the probability that you assign? Behold!"
(A gap opens in the sky, edged with blue fire.)
Mugger: "Now what do you think, eh?"
You: "Well... I can't actually say this has a likelihood ratio of 3↑↑↑3 to 1. No stream of evidence that can enter a human brain over the course of a century is ever going to have a likelihood ratio larger than, say, 101026 to 1 at the absurdly most, assuming one megabit per second of sensory data, for a century, each bit of which has at least a 1-in-a-trillion error probability. You'd probably start to be dominated by Boltzmann brains or other exotic minds well before then."
Mugger: "So you're not convinced."
You: "Indeed not. The probability that you're telling the truth is so tiny that God couldn't find it with an electron microscope. Here's the five dollars."
Mugger: "Done! You've saved 3↑↑↑3 lives! Congratulations, you're never going to top that, your peak life accomplishment will now always lie in your past. But why'd you give me the five dollars if you think I'm lying?"
You: "Well, because the evidence you did present me with had a likelihood ratio of at least a billion to one - I would've assigned less than 10-9 prior probability of seeing this when I woke up this morning - so in accordance with Bayes's Theorem I promoted the probability from 1/3↑↑↑3 to at least 109/3↑↑↑3, which when multiplied by an impact of 3↑↑↑3, yields an expected value of at least a billion lives saved for giving you five dollars."
I confess that I find this line of reasoning a bit suspicious - it seems overly clever - but at least on the level of intuitive-virtues-of-rationality it doesn't seem completely stupid in the same way as Pascal's Muggle. This muggee is at least behaviorally reacting to the evidence. In fact, they're reacting in a way exactly proportional to the evidence - they would've assigned the same net importance to handing over the five dollars if the Mugger had offered 3↑↑↑4 lives, so long as the strength of the evidence seemed the same.
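Restating the muggee's arithmetic as a worked equation (these are just the numbers from the dialogue above):

```latex
\[
  \text{posterior} \;\ge\;
  \underbrace{\frac{1}{3\uparrow\uparrow\uparrow 3}}_{\text{prior}}
  \times \underbrace{10^{9}}_{\text{likelihood ratio}}
  \;=\; \frac{10^{9}}{3\uparrow\uparrow\uparrow 3},
  \qquad
  \mathbb{E}[\text{lives saved}] \;\ge\;
  \frac{10^{9}}{3\uparrow\uparrow\uparrow 3} \times 3\uparrow\uparrow\uparrow 3
  \;=\; 10^{9}.
\]
```

The astronomical impact cancels the astronomical improbability exactly, leaving only the strength of the evidence, which is why this muggee pays up while still calling the Mugger a liar.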
But I still feel a bit nervous about the idea that Pascal's Muggee, after the sky splits open, is handing over five dollars while claiming to assign probability on the order of 10^9/3↑↑↑3 that it's doing any good. My own reaction would probably be more like this:
Mugger: "Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers."
Mugger: "So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?"
Me: "Yeah... that probably has to follow. I don't see any way around that revealed belief, given that I'm not actually giving you the five dollars. I've heard some people try to claim silly things like, the probability that you're telling the truth is counterbalanced by the probability that you'll kill 3↑↑↑3 people instead, or something else with a conveniently exactly equal and opposite utility. But there's no way that things would balance out that neatly in practice, if there was no a priori mathematical requirement that they balance. Even if the prior probability of your saving 3↑↑↑3 people and killing 3↑↑↑3 people, conditional on my giving you five dollars, exactly balanced down to the log(3↑↑↑3) decimal place, the likelihood ratio for your telling me that you would "save" 3↑↑↑3 people would not be exactly 1:1 for the two hypotheses down to the log(3↑↑↑3) decimal place. So if I assigned probabilities much greater than 1/3↑↑↑3 to your doing something that affected 3↑↑↑3 people, my actions would be overwhelmingly dominated by even a tiny difference in likelihood ratio elevating the probability that you saved 3↑↑↑3 people over the probability that you did something equally and oppositely bad to them. The only way this hypothesis can't dominate my actions - really, the only way my expected utility sums can converge at all - is if I assign probability on the order of 1/3↑↑↑3 or less. I don't see any way of escaping that part."
Mugger: "But can you, in your mortal uncertainty, truly assign a probability as low as 1 in 3↑↑↑3 to any proposition whatever? Can you truly believe, with your error-prone neural brain, that you could make 3↑↑↑3 statements of any kind one after another, and be wrong, on average, about once?"
Mugger: "So give me five dollars!"
Mugger: "Why not?"
Me: "Because even though I, in my mortal uncertainty, will eventually be wrong about all sorts of things if I make enough statements one after another, this fact can't be used to increase the probability of arbitrary statements beyond what my prior says they should be, because then my prior would sum to more than 1. There must be some kind of required condition for taking a hypothesis seriously enough to worry that I might be overconfident about it -"
Mugger: "Then behold!"
(A gap opens in the sky, edged with blue fire.)
Mugger: "Now what do you think, eh?"
Me (staring up at the sky): "...whoa." (Pause.) "You turned into a cat."
Mugger: "What?"
Me: "Private joke. Okay, I think I'm going to have to rethink a lot of things. But if you want to tell me about how I was wrong to assign a prior probability on the order of 1/3↑↑↑3 to your scenario, I will shut up and listen very carefully to what you have to say about it. Oh, and here's the five dollars, can I pay an extra twenty and make some other requests?"
(The thought bubble pops, and we return to two people standing in an alley, the sky above perfectly normal.)
Mugger: "Now, in this scenario we've just imagined, you were taking my case seriously, right? But the evidence there couldn't have had a likelihood ratio of more than 101026 to 1, and probably much less. So by the method of imaginary updates, you must assign probability at least 10-1026 to my scenario, which when multiplied by a benefit on the order of 3↑↑↑3, yields an unimaginable bonanza in exchange for just five dollars -"
Mugger: "How can you possibly say that? You're not being logically coherent!"
Me: "I agree that I'm being incoherent in a sense, but I think that's acceptable in this case, since I don't have infinite computing power. In the scenario you're asking me to imagine, you're presenting me with evidence which I currently think Can't Happen. And if that actually does happen, the sensible way for me to react is by questioning my prior assumptions and reasoning which led me to believe I shouldn't see it happen. One way that I handle my lack of logical omniscience - my finite, error-prone reasoning capabilities - is by being willing to assign infinitesimal probabilities to non-privileged hypotheses so that my prior over all possibilities can sum to 1. But if I actually see strong evidence for something I previously thought was super-improbable, I don't just do a Bayesian update, I should also question whether I was right to assign such a tiny probability in the first place - whether the scenario was really as complex, or unnatural, as I thought. In real life, you are not ever supposed to have a prior improbability of 10-100 for some fact distinguished enough to be written down, and yet encounter strong evidence, say 1010 to 1, that the thing has actually happened. If something like that happens, you don't do a Bayesian update to a posterior of 10-90. Instead you question both whether the evidence might be weaker than it seems, and whether your estimate of prior improbability might have been poorly calibrated, because rational agents who actually have well-calibrated priors should not encounter situations like that until they are ten billion days old. Now, this may mean that I end up doing some non-Bayesian updates: I say some hypothesis has a prior probability of a quadrillion to one, you show me evidence with a likelihood ratio of a billion to one, and I say 'Guess I was wrong about that quadrillion to one thing' rather than being a Muggle about it. And then I shut up and listen to what you have to say about how to estimate probabilities, because on my worldview, I wasn't expecting to see you turn into a cat. But for me to make a super-update like that - reflecting a posterior belief that I was logically incorrect about the prior probability - you have to really actually show me the evidence, you can't just ask me to imagine it. This is something that only logically incoherent agents ever say, but that's all right because I'm not logically omniscient."
When I add up a complexity penalty, a leverage penalty, and the "You turned into a cat!" logical non-omniscience clause, I get the best candidate I have so far for the correct decision-theoretic way to handle these sorts of possibilities while still having expected utilities converge.
As mentioned in the longer version, this has very little in the way of relevance for optimal philanthropy, because we don't really need to consider these sorts of rules for handling small large numbers on the order of a universe containing 10^80 atoms, and because most of the improbable leverage associated with x-risk charities is associated with discovering yourself to be an Ancient Earthling from before the intelligence explosion, which improbability (for universes the size of 10^80 atoms) is easily overcome by the sensory experiences which tell you you're an Earthling. For more on this see the original long-form post. The main FAI issue at stake is what sort of prior to program into an AI.
Theories of logical uncertainty are theories which can assign probability to logical statements. Reflective theories are theories which know something about themselves within themselves. In Paul's theory, there is an external P, in the meta language, which assigns probabilities to statements, an internal P, inside the theory, that computes probabilities of coded versions of the statements inside the language, and a reflection principle that relates these two P's to each other.
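As a rough sketch of the shape of that reflection principle (this is from my reading of the draft, so treat the exact schema as an assumption): for every sentence φ and all rationals a < b,

$$a < \mathbb{P}(\varphi) < b \;\Longrightarrow\; \mathbb{P}\left(a < P(\ulcorner\varphi\urcorner) < b\right) = 1,$$

where ℙ is the external, meta-language assignment and P is the internal probability symbol. That is, whenever the external probability of φ lies in an open rational interval, the system is externally certain that its internal probability of φ lies in that same interval.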
And Löb's theorem is the result that if a (sufficiently complex, classical) system can prove that "a proof of Q implies Q" (often abbreviated as □Q → Q), then it can prove Q. What would be the probabilistic analogue? Let's use □aQ to mean P('Q')≥1-a (so that □0Q is the same as the old □Q; see this post on why we can interchange probabilistic and provability notions). Then Löb's theorem in a probabilistic setting could be:
Probabilistic Löb's theorem: for all a<1, if the system can prove □aQ → Q, then the system can prove Q.
To understand this condition, we'll go through the proof of Löb's theorem in a probabilistic setting, and see if and when it breaks down. We'll conclude with an example to show that any decent reflective probability theory has to violate this theorem.
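For reference, the classical proof we'll be adapting runs roughly as follows (these are the standard modal steps, written out by me; L is the Löbian fixed point for the target sentence Q):

$$\begin{aligned}
&1.\ \vdash L \leftrightarrow (\Box L \to Q) && \text{(diagonal lemma)}\\
&2.\ \vdash \Box L \to \Box(\Box L \to Q) && \text{(1; necessitation and distribution)}\\
&3.\ \vdash \Box L \to (\Box\Box L \to \Box Q) && \text{(2; distribution)}\\
&4.\ \vdash \Box L \to \Box\Box L && \text{(internal necessitation)}\\
&5.\ \vdash \Box L \to \Box Q && \text{(3, 4)}\\
&6.\ \vdash \Box L \to Q && \text{(5 and the hypothesis } \vdash \Box Q \to Q\text{)}\\
&7.\ \vdash L && \text{(1, 6)}\\
&8.\ \vdash \Box L && \text{(necessitation on 7)}\\
&9.\ \vdash Q && \text{(6, 8)}
\end{aligned}$$

The probabilistic version replaces □ with □a throughout, and the question is which of these steps fails to survive the replacement.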
Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples of this could be autopilots, cruise control, GPS direction finding, high-frequency trading – in fact nearly every automated system has this feature, because they nearly all rely on humans "keeping an eye on things".
But often the human component doesn't perform as well as it should do – doesn't perform as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far more off course than following maps did. When the autopilot fails, pilots can crash their planes in rather conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop this.
There seem to be three factors at work here:
- Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.
- This goes along with a general deskilling of the overseer. When the autopilot controls the plane for most of its trip, pilots get far less hands-on experience of actually flying the plane. Paradoxically, less efficient automation can help with both these problems: if the system fails 10% of the time, the overseer will watch and understand it closely.
- And when the automation does fail, the overseer will typically lack situational awareness of what's going on. All they know is that something extraordinary has happened, and they may have the (possibly flawed) readings of various instruments to guide them – but they won't have a good feel for what happened to put them in that situation.
So, when the automation fails, the overseer is generally dumped into an emergency situation, whose nature they are going to have to deduce, and, using skills that have atrophied, they are going to have to take on the task of the automated system that has never failed before and that they have never had to truly understand.
And they'll typically get blamed for getting it wrong.
Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tools AIs, Oracle AIs, and, to a lesser extent, reduced impact AIs), we'll need to take the autopilot problem into account, and design the role of the overseer so as not to deskill them, and not count on them being free of error.
Daenerys' Note: This is the last item in the LW Women series. Thanks to all who participated. :)
The following section will be at the top of all posts in the LW Women series.
Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post. There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.
Seven women replied, totaling about 18 pages.
Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)
To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.
Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.
No seriously. I've grown up and lived in social circles where female privilege way outweighs male privilege. I've never been sexually assaulted, nor been denied anything because of my gender. I study a male-dominated subject, and most of my friends are polite, deferential feminism-controlled men. I have, however, been able to flirt and sympathise and generally girl-game my way into getting what I want. (Charming guys is fun!) Sure, there will eventually come a point where I'll be disadvantaged in the job market because of my ability to bear children; but I've gotta balance that against the fact that I have the ability to bear children.
In fact, most of the gender problems I personally face stem from biology, so there's not much I can do about them. It sucks that I have to be the one responsible for contraception, and that my attractiveness to men depends largely on my looks but the inverse is not true. But there's not much society can do to change biological facts, so I live with them.
I don't think it's a very disputed fact that women, in general, tend to be more emotional than men. I'm an INFJ, most of my (male) friends are INTJ. With the help of Less Wrong's epistemology and a large pinch of Game, I've achieved a fair degree of luminosity over my inner workings. I'm complicated. I don't think my INTJ friends are this complicated, and the complicatedness is part of the reason why I'm an "F": my intuition system is useful. It makes me really quite good at people, especially when I can introspect and then apply my conscious mind to my instincts as well. I don't know how many of the people here are F instead of T, but for anyone who uses intuition a lot, applying proper rationality to introspection (a.k.a. luminosity) is essential. It is so so so easy to rationalise, and it takes effort to just know my instinct without rationalising false reasons for it. I'm not sure the luminosity sequence helps everyone, because everyone works differently, but just being aware of the concept and being on the lookout for ways that work is good.
There's a problem with strong intuition though, and that's that I have less conscious control over my opinions - it's hard enough being aware of them and not rationalising additional reasons for them. I judge ugly women and unsuccessful men. I try to consciously adjust for the effect, but it's hard.
Onto the topic of gender discussions on Less Wrong - it annoys me how quickly things get irrational. The whole objectification debacle of July 2009 proved that even the best can get caught up in it (though maybe things have got better since 2009?). I was confused in the same way Luke was: I didn't see anything wrong with objectification. I objectify people all the time, but I still treat them as agents when I need to. Porn is great, but it doesn't mean I'm going to find it harder to befriend a porn star. I objectify Eliezer Yudkowsky because he's a phenomenon on the internet more than a flesh-and-blood person to me, but that doesn't mean I'd have difficulty interacting with a flesh-and-blood Eliezer. On the whole, Less Wrong doesn't do well at talking about controversial topics, even though we know how to. Maybe we just need to work harder. Maybe we need more luminosity. I would love for Less Wrong to be a place where all things could just be discussed rationally.
There's another reason that I come out on a different side to most women in feminism and gender discussions though, and this is the bit I'm only saying because it's anonymous. I'm not a typical woman. I act, dress and style feminine because I enjoy feeling like a princess. I am most fulfilled when I'm in a M-dom f-sub relationship. My favourite activity is cooking and my honest-to-god favourite place in the house is the kitchen. I take pride in making awesome sandwiches. I just can't alieve it's offensive when I hear "get in the kitchen", because I'd just be like "ok! :D". I love sex, and I value getting better at it. I want to be able to have sex like a porn star. Suppressing my gag reflex is one of the most useful things I learned all year. I love being hit on and seduced by men. When I dress sexy, it is because male attention turns me on. I love getting wolf whistles. Because of luminosity and self-awareness, I'm ever-conscious of the vagina tingle. I'm aware of when I'm turned on, and I don't rationalise it away. And the same testosterone that makes me good at a male-dominated subject, makes sure I'm really easily turned on.
I understand that all these things are different when I'm consenting and I'm viewed as an agent and all that. But it's just hard to understand other girls being offended when I'm not, because it's much harder to empathise with someone you don't agree with. Not generalising from one example is hard.
Understanding other girls is hard.
Kevin Drum has an article in Mother Jones about AI and Moore's Law:
THIS IS A STORY ABOUT THE FUTURE. Not the unhappy future, the one where climate change turns the planet into a cinder or we all die in a global nuclear war. This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
The result is paradise. Global warming is a problem of the past because computers have figured out how to generate limitless amounts of green energy and intelligent robots have tirelessly built the infrastructure to deliver it to our homes. No one needs to work anymore. Robots can do everything humans can do, and they do it uncomplainingly, 24 hours a day. Some things remain scarce—beachfront property in Malibu, original Rembrandts—but thanks to super-efficient use of natural resources and massive recycling, scarcity of ordinary consumer goods is a thing of the past. Our days are spent however we please, perhaps in study, perhaps playing video games. It's up to us.
Although he only mentions consumer goods, Drum presumably means that scarcity will end for services and consumer goods. If scarcity only ended for consumer goods, people would still have to work (most jobs are currently in the services economy).
Drum explains that our linear-thinking brains don't intuitively grasp exponential systems like Moore's law.
Suppose it's 1940 and Lake Michigan has (somehow) been emptied. Your job is to fill it up using the following rule: To start off, you can add one fluid ounce of water to the lake bed. Eighteen months later, you can add two. In another 18 months, you can add four ounces. And so on. Obviously this is going to take a while.
By 1950, you have added around a gallon of water. But you keep soldiering on. By 1960, you have a bit more than 150 gallons. By 1970, you have 16,000 gallons, about as much as an average suburban swimming pool.
At this point it's been 30 years, and even though 16,000 gallons is a fair amount of water, it's nothing compared to the size of Lake Michigan. To the naked eye you've made no progress at all.
So let's skip all the way ahead to 2000. Still nothing. You have—maybe—a slight sheen on the lake floor. How about 2010? You have a few inches of water here and there. This is ridiculous. It's now been 70 years and you still don't have enough water to float a goldfish. Surely this task is futile?
But wait. Just as you're about to give up, things suddenly change. By 2020, you have about 40 feet of water. And by 2025 you're done. After 70 years you had nothing. Fifteen years later, the job was finished.
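His milestones are easy to check. Here is a minimal sketch in Python, assuming the first ounce goes in during 1940, each addition doubles every 18 months, and Lake Michigan holds roughly 1.3 quadrillion US gallons (the volume figure is my assumption; Drum doesn't state one):

    # Sanity-check Drum's doubling arithmetic (my own sketch, not his code).
    OZ_PER_GALLON = 128
    LAKE_OZ = 1.3e15 * OZ_PER_GALLON   # assumed volume of Lake Michigan

    total_oz, addition_oz, year = 0.0, 1.0, 1940.0
    while True:
        total_oz += addition_oz        # pour in this period's addition
        if total_oz >= LAKE_OZ:
            break
        addition_oz *= 2.0             # the next addition is twice as large...
        year += 1.5                    # ...and happens eighteen months later

    print(f"Lake full around {int(year)}")  # prints roughly 2025

Printing the running total along the way reproduces his intermediate figures to within rounding: about a gallon by 1950, roughly 16,000 gallons by 1970, and still only a tiny fraction of the lake in 2010.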
He also includes this nice animated .gif which illustrates the principle very clearly.
Drum continues by talking about possible economic ramifications.
Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
Drum says the share of (US) national income going to workers was stable until about a decade ago. I think the graph he links to shows the workers' share has been declining since approximately the late 1960s/early 1970s. This is about the time US immigration levels started increasing (which raises returns to capital and lowers native worker wages).
The rest of Drum's piece isn't terribly interesting, but it is good to see mainstream pundits talking about these topics.
"Because I'm not smart or imaginative enough" is a perfectly plausible answer, but I've been mulling this one over on-and-off for a few months now, and I haven't come up with a single example that really captures what I consider to be the salient features of the scenario: a tangled hierarchy of preferences, and exploitation of that tangled hierarchy by an agent who cyclically trades the objects in that hierarchy, generating trade surplus on each transaction.
It's possible that I am in fact thinking about money-pumping all wrong. All the nearly-but-not-quite examples I came up with (amongst which were bank overdraft fees, Weight Watchers, and exploitation of addiction) had the characteristics of looking like swindles or the result of personal failings, but from the inside, money-pumping must presumably feel like a series of gratifying transactions. We would presumably welcome any money-pumping we were vulnerable to.
At the moment, I have the following hypotheses for the poverty of real-world money-pumping cases:
- Money-pumping is prohibitively difficult. The conditions that need to be met are too specific for an exploitative agent to find and abuse.
- Money-pumping is possible, but the gains on each transaction are generally so small as to not be worth it.
- Humans have faculties for identifying certain classes of strategy that exploit the limits of their rationality, and we tell any would-be money-pumper to piss right off, much like Pascal's Mugger. It may be possible to money-pump wasps or horses or something.
- Humans have some other rationality boundary that makes them too stupid to be money-pumped, to the same effect as #3.
- Money-pumping is prevalent in reality, but is not obvious because money-pumping agents generate their surplus in non-pecuniary abstract forms, such as labour, time, affection, attention, status, etc.
- Money-pumping is prevalent in reality, but obfuscated by cognitive dissonance. We rationalise equivalent objects in a tangled preference hierarchy as being different.
- Money-pumping is prevalent in reality, but obscured by cognitive phenomena such as time-preference and discounting, or underlying human aesthetic/moral tastes (parochial equivalents of pebble-sorting), which humans convince themselves are Real Things that are Really Real, to the same effect as #6.
Does anyone have anything to add, or any good/arguable cases of real-world money-pumping?
I know I said I'd be gone... but this was just a comment originally, and I noticed it may actually be relevant.
Elharo said in Munchkin Ideas:
Put as much money as you can afford into tax advantaged retirement accounts. In the U.S. that means 401K, 403b, IRA, SEP, etc.
I'm interested in the following:
Why should people invest in retirement? Or, instead, why should someone invest as much as most do in retirement?
A few facts that make it a boggling question for me:
You are 10% to 20% likely to die before you enjoy even your first retirement year.
People adjust much more to harsh economic conditions than they believe they would. They remain happy, as many studies by Seligman and others show.
People who retire are only happier as retirees if they retired by choice (I lost the paper, sorry).
Most people here live in rich countries - darn, hate to be the exception! - and their state would happily provide them with at least the maximal retirement pension legal in my country (approx. 2,000 dollars/month), and would surely provide double the minimum (about 200/month) if they needed it.
If you have descendants, they may support you in case you are still alive, and if you are not rich enough to keep a house, you have a good excuse to be in the company of loved ones (you have nowhere else to go).
Given all that, I have no clue what the whole fuss about retirement plans, and being 60% of a rich old person with a crappy body, is all about, especially if you are in the grave.
I mean, in the cryopreservation chamber, of course.
Edit: A related question not worth its own post, but maybe worth discussing, is: Should inheritance "jump" a generation, with everyone inheriting from grandparents instead of parents? Just the abstract ethical question, regardless of implementation procedure.
Related to: Scholarship: How to Do It Efficiently
There has been a slightly increased focus on the use of search engines lately. I agree that using Google is an important skill - in fact I believe that for years I have come across as significantly more knowledgeable than I actually am just by quickly looking for information when I am asked something.
However, there are obviously some types of information which are more accessible via Google and some which are less accessible. For example, distinct characteristics, specific dates of events, etc. are easily googleable1 and you can expect to quickly find accurate information on the topic. On the other hand, if you want to find out more ambiguous things, such as the effects of having more friends on weight, or even something like the negative and positive effects of a substance - then googling might leave you with some contradicting results, inaccurate information, or at the very least it will likely take you longer to get to the truth.
I have observed that in the latter case (when the topic is less 'googleable') most people, even those knowledgeable about search engines and 'science', will just stop searching for information after not finding anything on Google, or even before2, unless they are actually willing to devote a lot of time to it. This is where my recommendation comes in - consider doing a scholarly search like the one provided by Google Scholar.
And, no, I am not suggesting that people should read a bunch of papers on every topic that they discuss. By using some simple heuristics we can easily gain a pretty good picture of the relevant information on a large variety of topics in a few minutes (or less in some cases). The heuristics are as follows:
1. Read only or mainly the abstracts. This is what saves you time while giving you a lot of information in return, and it is the key to the most cost-effective way to quickly find information from a scholarly search. Often you won't have immediate access to the paper anyway; however, you can almost always read the abstract. And if you follow the other heuristics you will still be looking at relatively 'accurate' information most of the time. On the other hand, if you are looking for more information and have access to the full paper, then the discussion and conclusion sections are usually the second best thing to look at; and if you are unsure about the quality of the study, you should also look at the methods section to identify its limitations.3
2. Look at the number of citations for an article. The higher the better. Less than 10 citations in most cases means that you can find a better paper.
3. Look at the date of the paper. Often more recent = better. However, you can expect fewer citations for more recent articles and you need to adjust accordingly. For example, if the article came out in 2013 but it has already been cited 5 times, this is probably a good sign. For new articles the subheuristic that I use is to evaluate the 'accuracy' of the article by judging the author's general credibility instead - argument from authority.
4. Meta-analyses/Systematic Reviews are your friend. This is where you can get the most information in the least amount of time!
5. If you cannot find anything relevant fiddle with your search terms in whatever ways you can think of (you usually get better at this over time by learning what search terms give better results).
That's the gist of it. By reading a few abstracts in a minute or two you can effectively search for information regarding our scientific knowledge on a subject with almost the same speed as searching for specific information on topics that I dubbed googleable. In my experience scholarly searches on pretty much anything can be really beneficial. Do you believe that drinking beer is bad but drinking wine is good? Search on Google Scholar! Do you think that it is a fact that social interaction is correlated with happiness? Google Scholar it! Sure, it might seem obvious to you that X, but it doesn't hurt to search on Google Scholar for a minute just to be able to cite a decent study on the topic to those X disbelievers.
This post might not be useful to some people but it is my belief that scholarly searches are the next step of efficient information seeking after googling and that most LessWrongers are not utilizing this enough. Hell, I only recently started doing this actively and I still do not do it enough. Furthermore I fully agree with this comment by gwern:
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
A lot of people will be reluctant to start doing scholarly searches because they have barely done any or because they have never done one. I want to tell those people to still give it a try. Start by searching for something easy, maybe something that you already know from LessWrong or from somewhere else. Read a few abstracts; if you do not understand a given abstract try finding other papers on the topic - some authors adopt a more technical style of writing, others focus mainly on statistics, etc., but you should still be able to find some good information if you read multiple abstracts and identify the main points. If you cannot find anything relevant then move on and try another topic.
P.S. In my opinion, when you are comfortable enough to have scholarly searches as a part of your arsenal you will rarely have days when there is nothing to check for. If you are doing 1 scholarly search per month for example you are most probably not fully utilizing this skill.
1. By googleable I mean that the search terms are google friendly - you can relatively easily and quickly find relevant and accurate information.
2. If the people in question have developed a sense for what type of information is more accessible by google then they might not even try to google the less accessible-type things.
3. If you want to get a better and more accurate view on the topic in question you should read the full paper. The heuristic of mainly focusing on abstracts is cost-effective but it invariably results in a loss of information.
Hi all, I'm leaving LessWrong for a few months to pursue a Master's, and the text below will never be finished. It is just a story of what it is like to grow up outside of where everything is going on, in a country where the humanities are sad and terrible, and people are fun, but not quite wise.
Original Summary: Two things (note: were going to) permeate this text: a short autobiographical account of what it is like to grow up far from where things are happening, and an outside-view account of some of the people and institutions (MIRI, LW, Leverage Research, FHI, 80k, GWWC) which presently carry, as far as I can see, the highest expected-value gamble of our time. I have visited all those institutions, and my account here should be considered just one biased, subjective data point, not a proper evaluation of those places. Other people who come from developing-world countries might have interesting stories to tell, and I'd encourage them to do so (Pablo in Argentina, many in India, China and elsewhere)
(NOTE: There is nothing about the institutions here, only the growing up part was written by the time I decided to halt this writing)
Far away, across the sea
As is the case with most outliers, outcasts, and outsiders in general, a number of sociological facts determined my being the first person in Brazil acquainted with the cluster of ideas to which the institutions mentioned belong. Jonatas, the other Brazilian who entered this world early on (2004), has a very similar story to tell. The prerequisites seem to have been: young, middle class, a child of early adopters, inclined towards philosophy, living in a cosmopolitan area, with a particular disregard for authority (uncommon in Brazil), high IQ (approx. 4 SDs above the Brazilian average), and beginning to get stuck in a nonsense university system in the humanities. Due to expected-income considerations and a large variance in income among Brazilians, most of the high-IQ people go for Medicine, Engineering, Law and sometimes physical sciences. Thus many of the humanities become just signalling that you praise the right authorities (right here meaning whomever your advisor or professor was compelled to praise by his professor) and the cycle rolls on and on.
So I was left with good resources (time, curiosity, intellectual eagerness) and internet access. The web changed it all. It was hard to capture the signal among the noise in the intellectual world there, and my path began with reading a magazine interview with this guy who thought so differently that he seemed amazing - a 'biogeographist' is what the magazine called him (I had to invent a meaning for that). That was Jared Diamond. Then Guns, Germs, and Steel, and, buying books, waiting two months for them to come, I slowly built a foundational knowledge of the Third Culture people, those whom John Brockman currently gathers on The Edge website.
It seemed they were sensible and smart people, Dawkins, Dennett, Pinker, and many others. Yet in our closed country, in the humanities, no one had any idea of what that was all about. Understandably, I frequently thought I was wrong, or crazy, since that is what others thought about me. The neodarwinians were a huge problem in the morally punishing intellectual world I was living in; they were enough to make you an outcast, an untouchable perhaps. But they were not the worst, the worst was yet to come.
The worst was when I found Aubrey de Grey and Nick Bostrom. I should call those early years the schizophrenic ones, because only by focusing all my brainpower on being schizophrenic could I possibly survive among my peers while considering the opinions and thoughts of those two individuals sensible and worthy. It has recently been pointed out in one of the best posts here that:
Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.
That very particular sort of idiot composes 98% of our humanities academy, the intelligence that is valued is the subtle and sophisticated one that makes small benefits salient while concealing obviously enormous costs, or the one that signals capacity while making the world a worse place.
At the young age of eighteen I was learning Freudian babble during the day, reading Russell in the late afternoon, since he was both sensible and acceptable among my peers, being a hundred-year-old Lord, and subscribed to the Shock Level 4 email group controlled by Eliezer at night, noticing that something really big was going on and not having anyone around to talk about it. I'd be thinking about the Simulation Argument, and my friends would be thinking about what the teacher's password was for that particular behaviorist explanation that was discredited 70 years ago, and how they hated it because the Freudian alternative obviously felt right. It takes schizophrenia to survive in the wild.
The Path Became Smooth
Time went by, memes were spread, and slowly but steadily it was possible to come out of the closet about a lot of my beliefs and thoughts. The classes on how to write ambiguous commentaries on Hegel didn't stop, but the sanity waterline was being raised, especially among my colleagues who were pursuing exact-science degrees. 2008 was the turning point: suddenly I met one other Transhumanist, and eventually a rationalist, and near the end a utilitarian. Schizophrenia was no longer that necessary. Fast forward to now, 2013, and you have many of those ideas - the Singularity, ending ageing, considering cognitive science a part of psychology, brain-machine interfaces, etc. - all on the cover pages of major magazines and topics of conversation on TV shows.
Some few people started actually caring about that. Meanwhile something else was growing, the Effective Altruist movement.
(here this abruptly finishes, and won't be continued)
Every dollar I spend on myself is a dollar that could go much farther if spent on other people. I can give someone else a year of healthy life for about $50, and there's no way $50 can do anywhere near that much to help me. I could go through my life constantly weighing every purchase against the good it could do, but this would make me miserable. So how do I accept that other people need my money more without giving up on being happy myself?
For me the key is to make most choices donation neutral. As money comes in I divide it into "money to give to the most effective charity" and "money to spend as I wish". How to divide it is a hard and distressing choice, but it's one I only have to make once a year. Then when deciding to buy something (socks, rent, phone, instruments, food) I know it's money that isn't getting given away regardless, so I don't have to feel constantly guilty about making tradeoffs with people's lives.
Julia and I have been using this system since 2009.  It's mostly worked well, but it's needed some additions. The main issue is that declining to spend money on yourself isn't the only way to trade off benefits to other people against costs to yourself. For example you could decide to be vegan, donate a kidney, or cash out your vacation days and give away the money. For ones that generate money directly (cashing out vacation) the solution is simple: that money goes into the pool that can't be given away. For ones that don't generate money you would convert them into money via the good you think they do. Take the most effective charity you know about, figure out how much you would need to give to them in order to have the same positive effect, and then move that amount of money from donations to self-spending. For example, I might estimate that giving $100 to the AMF does about as much good as being vegan for a year, so if I decided to go ahead with being vegan I would decrease my annual donations by $100 and allocate another $100 to spend on myself.
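Here is a minimal sketch of that bookkeeping as I understand it (the income and the 50/50 split are placeholders of mine, not the authors' actual numbers; the $100 veganism figure is the post's own example):

    # Donation-neutral bookkeeping (a sketch of my reading of the post).
    class Budget:
        def __init__(self, income: float, give_fraction: float):
            self.give = income * give_fraction          # most-effective-charity pool
            self.spend = income * (1 - give_fraction)   # guilt-free personal pool

        def cash_sacrifice(self, amount: float) -> None:
            # Sacrifices that generate money directly (e.g. cashed-out
            # vacation days) feed the personal pool, keeping the choice
            # donation-neutral.
            self.spend += amount

        def non_cash_sacrifice(self, equivalent_donation: float) -> None:
            # Price a non-monetary sacrifice at the donation that would do
            # equal good, then move that amount from giving to self-spending.
            self.give -= equivalent_donation
            self.spend += equivalent_donation

    b = Budget(income=50_000, give_fraction=0.5)
    b.non_cash_sacrifice(100)       # going vegan ~ $100 to AMF, per the post
    print(b.give, b.spend)          # 24900.0 25100.0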
I may or may not decide that having another $X to spend on myself is better than making sacrifice Y, but whichever way I decide, I'm working to make myself as happy as possible for a given amount of doing good. It's not a choice that has additional lives saved weighing on either side of it.
(This doesn't deal with a potentially important category: things that only make you somewhat unhappy. For example, working a higher paying job you like less, or pushing yourself to host more effective altruism meetups than you'd really like to. I don't see how to deal with this, but I don't think it's been a problem so far.)
 Specifically, I can donate to the Against Malaria Foundation, which distributes anti-malarial nets. The main effect is averting deaths of children who will probably go on to live around 30 years once you take into account other things they might die from. This comes to about $75 per additional year of life. There are also many other people protected by the nets where it doesn't make the difference between life and death but helps them live healthier lives. That brings the $/DALY figure down to about $50.
 I also wrote about this approach in 2010 when it was much younger.
Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And, I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.
Curriculum development is hard.
So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great.
A case for developmental thinking
Below is an annotated bibliography of some of my personal touchstones in the development literature, books that are foundational or books that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.
And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Lots of these authors deeply care about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule or helping skills manifest when they wouldn't have otherwise appeared at all. Quibbles and double-takes aside, there is lots of signal, here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).
There are clues or even neon signs, here, for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.
So all the material below is not only useful for thinking about remedial or grade-school situations, and is not just for adding more tools to a cognitive toolbox, but could be useful for radically transforming a person's thinking style at a deep level.
child:adult :: adult: ?
This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades to attempt to describe individuals that may represent far less than one percent of the population.
0. Protocol analysis
Everyone knows that people are poor reporters of what goes on in their heads. But this is a straw man. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Granddaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I wanted to mention this first, because lots of the researchers below use verbal reports in their work.
1. Developmental aspects of scientific thinking
Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000 foot summaries. These people are awesome.)
David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and experiment space. He believes that scientific thinking is not different in kind from everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.
2. Developmental aspects of executive or instrumental thinking
Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals, and how these differences relate to an individual's "time horizon," or the maximum time length over which an individual can comfortably execute a goal. Additionally, he has explored how these factors predictably change over a lifespan.
3. Developmental aspects of entrepreneurial thinking
Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.
"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf
4. General Cognitive Development
Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over a lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.
Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through:
Here is a recent look at the field:
By the way, having explicit cognitive goals predicts an increase in ego level, three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased wellbeing.) Socio-emotional goals do predict an increase in subjective well-being, three years later. Great study:
Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.
5. Bridging symbolic and non-symbolic cognition
Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"
The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.
Marion N. Hendricks reviews 89 studies, concluding that [I quote]:
- Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
- Clients and therapists judge sessions in which focusing takes place as more successful.
- Successful short term therapy clients focus in every session.
- Some clients focus immediately in therapy; Others require training.
- Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
- Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
- Successful training in focusing is best maintained by those clients who are the strongest focusers during training.
http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]
http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]
http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]
6. Rigorous Instructional Design
Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:
Here, he systematically eviscerates an example of educational material that doesn't meet his standards:
And this is his instructional design philosophy:
In conclusion, lots of scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults. And lots of scientists care about making those differences happen ahead of schedule or happen when they wouldn't have otherwise happened at all. This is a valuable and complementary perspective to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.
In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U-235 dust). If this is the case then Fermi's estimate of a "ten percent" probability of nuclear weapons may have actually been justifiable because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" instead of "2%" or "50%" but then I'm not Fermi.
We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory. We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight. Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate from multiple sources that the Sun was around sixty million years of age. This was wrong, but because of new physics - though you could make a case that new physics might well be expected in this case - and there was some degree of contrary evidence from geology, as I understand it - and that's not exactly the same as technological skepticism - but still. Where there are sort of two, there may be more. Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could've seen coming?
I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then not question them further, so I'll phrase it more carefully this way: I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would've been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints. (So that relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.) We probably should also not accept as "erroneous" any prediction of technological impossibility where it required more than, say, seventy years to get the technology.
"A group blog, More Right is a place to discuss the many things that are touched by politics that we prefer wouldn’t be, as well as right wing ideas in general. It grew out of the correspondences among like minded people in late 2012, who first began their journey studying the findings of modern cognitive science on the failings of human reasoning and ended it reading serious 19th century gentlemen denouncing democracy. Surveying modernity, we found cracks in its façade. Findings and seemingly correct ideas, carefully bolted down and hidden, met with disapproving stares and inarticulate denunciation when unearthed. This only whetted our appetites. Proceeding from the surface to the foundations, we found them lacking. This is reflected in the spirit of the site."
This is meant as a rough collection of five ideas of mine on potential anti-Pascal Mugging tactics. I don't have much hope that the first three will be of any use at all, and am afraid that I'm not mathematically inclined enough to know if the last two are any good even as a partial solution towards the core problem of Pascal's Mugging -- so I'd appreciate it if people with better mathematical credentials than mine could see if any of my intuitions could be formalized in a useful manner.
0. Introducing the problem (this may bore you if you're aware of both the original and the mugger-less form of Pascal's Mugging)
First of all the basics: Pascal's Mugging in its original form is described in the following way:
- Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^3 people."
This is the "shallow" form of Pascal's mugging, which includes a person that (almost certainly) is attempting to deceive the prospective AI. However let's introduce some further statements similar to the above, to avoid particular objections that might be used in some (even shallower) attempted rebuttals:
- "Give me five dollars, and I'll use my magic powers from outside the Matrix to increase the utility of every human being by 3^^^^3 utilons" (a supposedly positive trade rather than a blackmailer's threat)
- "I'm an alien in disguise - unless you publicly proclaim allegiance to your insect overlords, we will destroy you then torture all humanity for 3^^^^3 years" (a prankster asks for something which might be useful to an actual alien, but on a material-level not useful to a human liar)
- "My consciousness has partially time-travelled from the future into the past, and one of the few tidbits I remember is that it would be of effectively infinite utility if you asked everyone to call you Princess Tutu." (no trade offered at all, seemingly just a statement of epistemic belief)
- Says the Devil "It's infinitely bad to end that song and dance
And I won't tell you why, and I probably lie, but can you really take that chance?"
Blaise fills with trepidation as his calculations all turn out the devil's way.
And they say in the Paris catacombs, his ghost is fiddlin' to this day.
I think these are all trivial variations of this basic version of Pascal's Mugging: The utility a prankster derives from the pleasure of successfully pranking the AI wouldn't be treated differently in kind to the utility of 5 dollars -- nor is the explicit offer of a trade different than the supposedly free offer of information.
The mugger-less version is on the other hand more interesting and more problematic. You don't actually need a person to make such a statement -- the AI, without any prompting, can assign prior probabilities to theories which produce outcomes of positive or negative value vastly greater than their assigned improbabilities. I've seen its best description in the comment by Kindly and the corresponding response by Eliezer:
Kindly: Very many hypotheses -- arguably infinitely many -- can be formed about how the world works. In particular, some of these hypotheses imply that by doing something counter-intuitive in following those hypothesis, you get ridiculously awesome outcomes. For example, even in advance of me posting this comment, you could form the hypothesis "if I send Kindly $5 by Paypal, he or she will refrain from torturing 3^^^3 people in the matrix and instead give them candy." Now, usually all such hypotheses are low-probability and that decreases the expected benefit from performing these counter-intuitive actions. But how can you show that in all cases this expected benefit is sufficiently low to justify ignoring it?
Eliezer Yudkowsky: Right, this is the real core of Pascal's Mugging [...]. For aggregative utility functions over a model of the environment which e.g. treat all sentient beings (or all paperclips) as having equal value without diminishing marginal returns, and all epistemic models which induce simplicity-weighted explanations of sensory experience, all decisions will be dominated by tiny variances in the probability of extremely unlikely hypotheses because the "model size" of a hypothesis can grow Busy-Beaver faster than its Kolmogorov complexity.
The following lists five ideas of mine, ordered least-to-most-promising in the search for a general solution. Though I considered them seriously initially, I no longer really think that (1), (2) or (3) hold any promise, being limited, overly specific or even plain false -- I nonetheless list them for completeness' sake, to get them out of my head and in case anyone sees something in them that could potentially be the seed of something better. I'm slightly more hopeful for solutions (4) and (5) -- they feel to me intuitively as if they may be leading to something good. But I'd need math that I don't really have to prove or disprove it.
1. The James T. Kirk solution
To cut to the punchline: the James T. Kirk solution to Pascal's Mugging is "What does God need with a starship?"
Say there's a given prior probability P(X=Matrix Lord) that any given human being is a Matrix Lord with the power to inflict 3^^^3 points of utility/disutility. The fact that such a being with such vast power seemingly wants five dollars (or a million dollars, or to be crowned Queen of Australia) makes it actually *less* likely that such a being is a Matrix Lord.
We don't actually need the vast unlikely probabilities to illustrate the truth of this. Let's consider an AI with a security backdoor -- it's known for a fact that there's one person in the world who has been given a 10-word passkey that can destroy the AI at will. (The AI is also disallowed from attempting to avoid such penalty by e.g. killing the person in question.)
So let's say the prior probability for any given person being the key keeper in question is 1 in 7 billion.
Now Person X says to the AI: "Hey, I'm the key keeper. I refuse to give you any evidence to the same, but I'll destroy you if you don't give me 20 dollars."
Does this make Person X more or less likely to be the key keeper? My own intuition tells me "less likely".
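My intuition here cashes out as a claim about likelihood ratios (my own formalization, not from the original discussion). By Bayes,

$$P(\text{keeper}\mid\text{demand}) = \frac{P(\text{demand}\mid\text{keeper})\,P(\text{keeper})}{P(\text{demand})},$$

so the posterior falls below the 1-in-7-billion prior exactly when P(demand | keeper) < P(demand), i.e. when a genuine key keeper is less likely to make this evidence-free 20-dollar demand than a person drawn at random is. "What does God need with a starship?" is the claim that this inequality holds.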
Unfortunately, one fundamental problem with the above syllogism is that at best it can tell us that it's only the muggerless version that we need fear. Useless for any serious purpose.
2. The presumption of unfriendliness
This is most obviously the method that, in the examples above, poor Blaise should have used to defeat the devil's claim of infinite badness. In a universe where ending 'that song and dance' can be X-bad, the statement should also be considered that it could be X-bad NOT to end it, or indeed X-good to end it. The devil (being a known malicious entity) is much more likely to push Pascal towards doing what would result in the infinite badness. And indeed in the fictional evidence provided by the song, that's exactly what the devil achieves: to harm Blaise Pascal and his ghost for an arbitrarily long time - by warning against the same and using Pascal's calculations against him.
Blaise's tactic should have been not to obey the devil's warning, nor to even do the opposite than his suggestion (since the devil could be smart enough to know how to use reverse psychology), but rather to ignore him as much as possible: Blaise should end the song and dance at the point in time he would have done if he wasn't aware of the devil's statement.
All the above is obvious for cartoonish villains like the devil -- known malicious agents who are known to have a utility function opposed to ours -- and a Matrix Lord who is willing to torture 3^^^3 people for the purpose of getting 5 dollars is probably no better; better to just ignore them. But I wonder: Can't a similar logic be used in handling most any agents with utility functions that are merely different than one's own (which is the vast number of agents in mindspace)?
Moreover a thought that occurs: Doesn't it seem likely that for any supposed impact X, the greater the promised X, the less likely two different minds are both positively inclined towards it? So for any supposed impact X, shouldn't the presumption of unfriendliness (incompatibility in utility functions) increase in like measure?
3. The Xenios Zeus.
This idea was inspired from the old myth about Zeus and Hermes walking around pretending to be travellers in need, to examine which people were hospitable and which were not. I think there may exist similar myths about other gods in other mythologies.
Let's say that each current resident has a small chance (not necessarily the same small chance) of being a Matrix Lord willing to destroy the world and throw a temper tantrum that'll punish 3^^^3 people if you don't behave according to what he considers proper. Much like each traveller has a chance of being Zeus.
One might think you have to examine the data very closely to figure out which random person has the greatest probability of being Zeus -- but that rather misses the moral of the myth, which isn't "figure out who is secretly Zeus" but rather "treat everyone nicely, just in case". If someone does not reveal themselves to be a god, then they don't expect to be treated like a god, but they might still expect human decency.
To put it in LW-analogous terms, one might argue that an AI could treat the value systems of even Matrix Lords as roughly centered around the value system of human beings -- so that by serving the CEV of humanity, it would also have the maximum chance of pleasing (or at least not angering) any Matrix Lords in question.
Unfortunately in retrospect I think this idea of mine is, frankly, crap. Not only is it overly specific and again seems to treat the surface problem rather than the core problem, but I realized it reached the same conclusion as (2) by asserting the exact opposite -- the previous idea made an assumption of unfriendliness, this one makes an assumption of minds being centered around friendliness. If I'm using two contradictory ideas to lead to the same conclusion, it probably indicates that this is a result of having written the bottom line -- not of making an actually useful argument.
So not much hope remains in me for solutions 1-3. Let's go to 4.
4. The WWBBD Principle. (What Would a Boltzmann Brain Do?)
If you believed with 99% certainty that you were a Boltzmann Brain, what should you do? The smart thing would probably be: whatever you would do if you weren't a Boltzmann Brain. You can dismiss the hypotheses in which you have no control over the future, because it's only the ones where you have control that matter on a decision-theoretic basis.
Calculations of future utility have a discounting factor naturally built into them -- namely, the uncertainty of being able to predict and control that future properly. So in a very natural manner (no need to program it in), an AI would prefer the same utility 5 seconds in the future to 5 minutes in the future, and 5 minutes in the future to 5 years in the future.
This looks at first glance like a time-discount, but in actuality it's an uncertainty-discount. So an AI with very good predictive capacity would discount future utility less, because the uncertainty would be less. But the uncertainty would never be quite zero.
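Here's a minimal sketch of that uncertainty-discount; the per-second reliability figure is an arbitrary assumption of mine, purely for illustration:

```python
# Uncertainty-discounting as sketched above: future utility is weighted
# by the probability that the agent's predictions and control still
# hold at that horizon. No explicit time preference is programmed in.
def p_control(seconds, reliability_per_second=0.9999):
    """Probability the prediction/control chain survives to time t."""
    return reliability_per_second ** seconds

def discounted_utility(utility, seconds):
    return utility * p_control(seconds)

for label, horizon in [("5 seconds", 5),
                       ("5 minutes", 5 * 60),
                       ("5 years", 5 * 365 * 24 * 3600)]:
    print(label, discounted_utility(1.0, horizon))
# The same nominal utility is worth less at 5 minutes than at 5 seconds,
# and almost nothing at 5 years -- purely from compounding uncertainty.
```

A better predictor corresponds to a reliability closer to 1, which discounts the future less -- but never not at all.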
So even as the thought of 3^^^3 lives outweighs the tiny probability, couldn't a similar factor push in the opposite direction, especially for hypotheses in which the AI will have no further control? I don't know. Bring in the mathematicians.
5. The Law of Visible Impact (a.k.a. The Generalized Hanson)
Robin Hanson's suggested solution to Pascal's Mugging has been the penalization of "the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us."
I have to say that I find this argument unappealing and unconvincing. One problem I have with it is that it seems to treat the concept of "person" as ontologically fundamental -- it's an objection I kinda have also against the Simulation argument and the Doomsday Argument.
Moreover, wouldn't this argument cease to apply if I were merely witnessing Pascal's mugging taking place -- so that, as a mere witness, I should be hoping for the mugged entity to submit? This sounds nonsensical.
But I think Hanson's argument can be modified, so here I'd like to offer what I'll call the Generalized Hanson: penalize the prior probability of hypotheses which argue for the existence of high-impact events whose consequences nonetheless remain unobserved.
If life's creation is easy, why aren't we seeing alien civilizations consuming the stars? Therefore most likely life's creation isn't easy at all.
If the universe allowed easy time-travel, where are all the time-travellers? Hence the world most likely doesn't allow easy time-travel.
If Matrix Lords exist that are apt to create 3^^^3 people and torture them for their amusement, why aren't we being tortured (or witnessing such torture) right now? Therefore most likely such Matrix Lords are rare enough to nullify their impact.
In short, the higher the impact of a hypothetical event, the more evidence we should expect to see of it in the surrounding universe -- so the non-visibility of such evidence counts against the hypothesis in proportion to its hypothesized impact.
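One crude way to cash this out (my formalization, not anything from Hanson): cap the prior of a hypothesis in inverse proportion to the impact it claims, so that claimed impact and credence cancel and expected impact stays bounded. The cap constant k below is an assumption:

```python
# Capping priors in inverse proportion to claimed impact -- a sketch of
# the "Generalized Hanson" above, not an established rule.
def capped_prior(stated_prior, claimed_impact, k=1.0):
    return min(stated_prior, k / claimed_impact)

impact = 1e300  # stand-in for 3^^^3, which overflows any float anyway
prior = capped_prior(1e-9, impact)
print(prior * impact)  # ~1.0: expected impact is bounded by k
```

Whether such a penalty can be justified as a prior, rather than merely asserted, is exactly the part that needs the mathematicians.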
I'm probably expressing the above intuition quite badly, but again: I hope someone with actual mathematical skills can take the above and make it into something useful; or tell me that it's not useful as an anti-Pascal Mugging tactic at all.
I've been thinking about this a bit recently, and thought I'd do a dump of evidence and conjecture, and see what Less Wrong had to say.
There are lots of areas of life where activities can be partitioned into either producing products or consuming products. For those areas, it may be worthwhile to calculate one's Produce / Consume Ratio (PCR) and also contemplate what the optimal PCR is for that area.
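For concreteness, a trivial sketch of such a calculation, with made-up weekly hours for one area (say, blogging):

```python
# Toy Produce/Consume Ratio for one area (hypothetical numbers).
hours_producing = 3.0   # e.g. writing posts and comments
hours_consuming = 12.0  # e.g. reading posts and comments
pcr = hours_producing / hours_consuming
print(f"PCR = {pcr:.2f}")  # 0.25 -- whether that's optimal is the question
```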
This post is a minor note, to go along with the post on the probabilistic Löb theorem. It simply seeks to justify why terms like "having probability 1" are used interchangeably with "provable" and why implications symbols "→" can be used in a probabilistic setting.
Start with a logical system that has a single rule of inference, modus ponens: from A and A→B, deduce B.
Having a single rule of inference isn't much of a restriction, because you can replace other rules of inference ("from A1,A2,... and An, deduce B") with an axiom or axiom schema ("A1∧A2∧...∧An → B") and then use modus ponens on that axiom to get the other rule of inference.
In this logical system, I'm now going to make some purely syntactical changes - not changing the meaning of anything, just the way we write things. For any sentence A that doesn't contain an implication arrow →, replace
A with P(A)=1.
Similarly, replace any sentence of the type
A → B with P(B|A)=1.
This is recursive, so we replace
(A → B) → C with P(C | P(B|A)=1 )=1.
Modus ponens then becomes: from P(A)=1 and P(B|A)=1, deduce P(B)=1. (This rule is sound, since P(B) ≥ P(A∧B) = P(B|A)P(A) = 1.)
I find Eliezer's explanation of what "should" means to be unsatisfactory, and here's an attempt to do better. Consider the following usages of the word:
- You should stop building piles of X pebbles because X = Y*Z.
- We should kill that police informer and dump his body in the river.
- You should one-box in Newcomb's problem.
All of these seem to be sensible sentences, depending on the speaker and intended audience. #1, for example, seems a reasonable translation of what a pebblesorter would say after discovering that X = Y*Z. Some might argue for "pebblesorter::should" instead of plain "should", but it's hard to deny that we need "should" in some form to fill the blank there for a translation, and I think few people besides Eliezer would object to plain "should".
Normativity, or the idea that there's something in common about how "should" and similar words are used in different contexts, is an active area in academic philosophy. I won't try to survey the current theories, but my current thinking is that "should" usually means "better according to some shared, motivating standard or procedure of evaluation", but occasionally it can also be used to instill such a standard or procedure of evaluation in someone (such as a child) who is open to being instilled by the speaker/writer.
It seems to me that different people (including different humans) can have different motivating standards and procedures of evaluation, and apparent disagreements about "should" sentences can arise from having different standards/procedures, or from disagreement about whether something is better according to a shared standard/procedure. In most areas my personal procedure of evaluation is something that might be called "doing philosophy", but many people apparently do not share this. For example, a religious extremist may have been taught by their parents, teachers, or peers to follow some rigid moral code given in their holy books, and not be open to any philosophical arguments that I can offer.
Of course this isn't a fully satisfactory theory of normativity since I don't know what "philosophy" really is (and I'm not even sure it really is a thing). But it does help explain how "should" in morality might relate to "should" in other areas such as decision theory, does not require assuming that all humans ultimately share the same morality, and avoids the need for linguistic contortions such as "pebblesorter::should".
There was a recent post in Discussion which, at the time of this writing, held a staggering 454 comments; this inclined me to write a post on courtship and Mating Intelligence, drawing on evolutionary psychology and social endocrinology, to share some readings on recent discussions and evidence coming from those areas. I've been meaning to do this for a while, and a much longer version could have been written, with more specific case studies, citations, and an academic outlook, yet I find this abridged personal version more adequate for Lesswrong. In no area are more disclaimers desirable than when speaking about evolutionary drives for mating. It touches emotions, gender issues, morality, and societal standards, and it speaks of topics that make people shy, embarrassed, angry, and happy on a weekly basis, so I'll begin with a few paragraphs of disclaimers.
I'll try to avoid saying anything that I can remember having read in a Pick Up Artist book, and focus on using lesser-known mating biases to help straight women and men find what they look for in different contexts. This post won't work well for same-gender seduction. If you object irrevocably to evolutionary psychology, just-so stories, etc... I suggest you refrain from commenting, and from reading too; why bother?
Words of caution on reading people (me included) talking about evolutionary psychology, especially when applied to present-day people: suspicious about whether there is good evidence for it? Read this first, then, if you want, Eliezer on the evolutionary-cognitive difference, and this if your feminist taste buds activate negatively. If you've never heard of Evolutionary Psychology (which draws on 8 different bodies of data), check also an Introduction with Dawkins and Buss.
When I say "A guy does D when G happens", please read: "There are statistically significant, or theoretically significant, reasons from social endocrinology, or social and evolutionary psychology, to believe that under circumstances broadly similar to G, human males, on average, will be inclined towards behaving in manners broadly similar to D. Also, most tests are made with Western human males; the tests are less than 40 years old, subject to publication bias, and sometimes done by people who don't understand math well enough to do their statistics homework; they have not been replicated several times; and they are less homogeneous than physics, because psychology is more complex than physics."
If you couldn't care less about theory, and just want the advice, go to the Advice Session.
Thus far, evolutionary psychology suggests that our genes come equipped with two designs for thinking about mating, which become activated through environmental cues.
Knowing this is becoming mainstream. The state of the art term is Mating Intelligence, and it has these two canonical modes that can be activated, depending on factors as diverse as being informed that X is leaving town in two days, and detecting X's level of testosterone, accounting for his height and status, and calculating whether his genes are worth more or less than his future company. If you choose to read the linked books, then you'll delve in this much deeper than I have, so stop reading this, and write a post of your own afterwards.
I'll list some main misconceptions, then suggest how to use either the misconceptions, or the theory mentioned while explaining them, to optimize for whatever you want from individuals of the opposite gender at a particular moment.
Misconception 1: Guys do Short-term, Girls do Long-term, unless they don't have this option.
This is false. Guys are very frequently pair-bonded, often even before women are; both have oxytocin levels going up after sex, and both have high levels of oxytocin during relationships. Girls only have less frequent casual intercourse because it is hard to find males worthy of the two-year baby-raising period, or, in the case where they are pair-bonded already, because of the risk of the cuckolded "father" leaving, fighting her, or recognizing the baby ain't his. Obviously, no one's brain has managed to completely catch up with condoms and open relationships yet.
Misconception 2: Women go for the bad guys (if I remember my American Pie movies correctly, also called jocks in the US), and good guys, nerds, and conventionals are left last.
'Bad guys' is a popular name for high-testosterone, risk-taking individuals with little routine. And indeed, when a woman's short-term mating intelligence program is activated, which happens particularly when she is ovulating and young (even when she's married or in a close relationship), she does exhibit a preference for such types. When optimizing for long-term partners, the reverse is true.
Misconception 3: Guys just go for looks, Girls just go for status.
Toned-down reality: guys in short-term mating mode go for looks. Girls in long-term mating mode care substantially about the difference between lower-than-average status and average status; then marginal utility decreases, and more status is outweighed by other desirable traits.
Women in short-term mode do not optimize for status; they'll take a bus boy who shows through size, melanin, symmetry, and chin that he survived local pathogens despite his high testosterone. She's after resistant genes, not resources. Men in long-term mode still optimize for looks, but not that much; kindness and emotional stability take over when the marginal returns on more beauty start subsiding.
Misconception 4: When genders optimize for Status, Status=Money.
Unlike all known primate and cetacean species, humans daily deal with being high, low, and medium status in different hierarchical situations. This should be so obvious as not to be worth mentioning, but sadly there are strong media incentives, and, for some reason I don't understand well, strong pressures within English and American culture, to pretend that women go for status, status=money, therefore women go for money, and men should make more money. It may be a selection effect: the societies that financially took over the world believed that being financially powerful was the best way to get laid, or marry. It may just be that marketing these things together (using sexy women to sell cars) created a long-term Pavlovian association. Fact is that it unfortunately happened, and people believe it, despite it being false. Women who begin believing it sometimes force themselves into acting on it even more.
Status has no universal measure. If you met someone on a basketball team, status will be how good that person is, plus their game attitude. If in a class at university, maybe it will be how well-spoken the person is on the relevant topic. Status can be how much food the person usually shares with groups, or how much they can ask for others without being very apologetic. It can be how many women sleep with a man, or how many he can afford to reject. It can be how many purses a woman has, or how well she can show thrift and a sense of belonging to a community that identifies as anti-consumerist. Some minds assign status based on location of birth, race, hair color, etc... (In my city, Japanese women, all 400,000 of them, are commonly assumed to be high status.) Finally, men do optimize, in long-term mates, for the trait people think of as status, as explained below.
Even in the case where status plays the largest role -- women activating long-term reasoning -- status is only one of four factors, multiplied together, that are important for the same reason and detected in a prospective male mate:
Kindness * Dependability * (Ambition - Age) * Status = how many resources a man is expected to share with you and your hypothetical kids.
And this does not even begin to account for any physical trait, nor intelligence, humour, energy levels etc... If you take one thing out of this text, take this: Make your beliefs about what status is pay rent. Test if status is what people think it is, or something that only roughly correlates with that. Sophisticate your status modules, they may have been corrupted.
Misconception 5: Once you learn what your mind is doing when it selects mates, you should make it get better at that.
Let's begin by reaffirming the obvious: we live in a world that has nothing to do with the savannahs where our minds spent most of their history. We can access thousands, if not millions, of people during a lifetime. We have condoms and contraceptives. We live in an era of abundance compared to any other time in history, and in societies so large that the moral norms constraining what "everyone will know" do not apply anymore.
So the last thing you want to do is to make your mind really sharp and accurate when judging a potential mate through its natural algorithms. What you want to do, to the extent that it is possible, is to override your algorithms with something that is better, and better is one of these two things:
1) Increasing your likelihood of mating with the individual (or class of individuals) you want to mate with in a matched time-horizon (long if you want long, for instance).
2) Enlarging the scope of individuals you want to mate with to include more people you actually do, will or can get to know.
To give better advice, I'll first mention general advice anyone can use, and then specific advice for the four quadrants. For those who will say this is the Dark Arts, I say it would be if we lived in a Savannah without condoms, heating, medicine, houses or internets. Now it looks to me more like causing one-self, and one's beloved, to be more epistemically rational.
Women, be confident: if you are a woman, be more confident, way more confident, when approaching a guy. Don't be aggressive, just safe. Your mind is tuned with who knows how many trigger devices that may make you afraid of a no, of being thought of as slutty, of losing face, and of the guy not raising your kids. Discount for all that, twice. Don't do it if everyone really will know, or if you actually want kids from that guy.
Use your best horizon features: If you have a trait that the other gender optimizes for more in short-term, lure them by acting short-term, even if later you'll attempt to raise their oxytocin to the long-term point. If you have goods and ills on both time horizons, switch back and forth until you grasp what they want.
Discount for population size: There are two ways of doing that, one is to reason to yourself "I may not be as attractive as Natalie Portman or Brad Pitt, but our minds are tuned to trying to get the best few achievable mates out of a group of 100-1000, not of hundreds of millions, so I do stand a very good chance" The other is nearly opposite: "I may think that I should only marry a prince, or sleep with Iron Man, but in fact my world is much smaller than this, and my mind will be totally okay to mate with Adam, that cool guy."
Be hedonistic: For men and women alike, the main way evolution got us into intercourse was by making it fun. The reasons it got us out are related to unlikelihood of leaving great-grandchildren, energy waste, disease, and lowered status. Of those, only a subset of lowered status is still significant in a world full of condoms. Other than women when aiming at long-term only, everyone is completely under-calibrated for sex, since we substantially reduced the risks without reducing the hedonic benefits nearly as much.
Use fetishes and peculiarities: there are things each particular person is attracted to more than everyone else is (for me that's freckles; red, orange, blue or purple hair; upper backs; and short women). Use that in your favour: less competition, as simple as that.
Go places: there are better and worse places to find mates. Short-terming males (a temporary condition in which any male may find himself, not a kind of male) abound in dancing clubs, military facilities, and sports areas, not to mention OkCupid. Long-terming females (same) abound in courses and classes of yoga, dancing, cooking, languages, etc... Long-terming males usually have more of a routine, so they are more frequent on Saturdays and Fridays than on a Tuesday late evening; they'll be more frequent wherever no one would naturally go to find a one-night stand, or in groups that are preselected for strong emotions (low thresholds for falling in love). Short-terming females may exist in dancing clubs, bars, and other related areas, but are very high value there due to comparative scarcity; someone looking for them is better off in groups with a small majority of women, where social tension and hierarchies don't scale up in either gender.
Note: the advice is about things you should do in addition to what you naturally tend to do in those situations; you already have the algorithms, and should just improve calibration. Except where explicitly stated, the suggestion is not to substitute for what you naturally tend to do, or this would be a book all by itself, explaining four kinds of human courtship.
For Long-terming Men: Stop freaking out about financial status. Find a place where you are among the great ones in something, especially kindness, dependability, physical constitution, and symmetry, which guys think of less frequently than successful startups or tennis world champions. If you are hot, use short-term; women are particularly prone to switching from short to long-term. Get a dog; show you are able and willing to take care of something unspeakably cute and adorable. Be ambitious in your projects, show passion. While ambitious and passionate, also make sure she realizes (truly) that you notice things about her no one else does; find out her values, talk about shared ones, and be non-aggressively curious about all of them. Show her kindness in small gestures that need not cost a lot, such as time-consuming hand-made presents. Test OkCupid and see if it works for you. Memorize details about her personality; assure her you can be loving specifically to her. Postpone sex a little bit. It may sound hard, but it is a reliable indicator that you won't change her for the next one that quickly. Rationally override any emotion you may have regarding her sexual behavior; show you are not aggressive and jealous, thus making her believe (and alieve) unconsciously that you will not kill her in an assault of hatred when she sleeps with a hypothetical other man whose child will never exist and get some years of schooling from you. If you think you can tell the wheat from the chaff, separate out the PUA stuff that works for long-term; if not, read softer confidence/influence/seduction material. Use oxytocin-inducing media (TV series and romantic movies). Rest assured, there are more women looking for long-term men than the opposite; aid the odds by going places. Show sympathy, kindness (to others as well), and dependability whenever you can.
For Long-terming Women: If you've been convinced by the financial status gospel, stop freaking out about it. If you just account for the 4 factors in the equation above, you'll be way ahead of everyone within the gospel trance; then there are still all the other things you look for in a guy, which by themselves are very important. Sure, a classic indicator is how much other women in your social group like him, and, good as it is, it is defined in terms of competition; try to discount this one, since it is partially just made of a conformity bias, a bad bias to have when looking for a long-term mate. Be very nice and kind, and almost silly, near the guy. The kinds of guys who are Long-terming most of the time are those who won't approach you that frequently. Also, older guys obviously have less chaos in their minds and lives, so they are more likely to want to settle down for a few years. Postpone sex in proportion to how much you suspect the guy is Short-terming. The importance of this cannot be overstated. By postponing sex (and sex alone) you make sure Short-termers still have a good reason to be around you until suddenly there is a hormonal overload and they fall in love with you (not that romantic, but mildly accurate); love's trigger is activated by many factors, when they sum above a threshold. The most malleable of these factors is time investment; give a guy mixed short-long signals, and you'll increase the likelihood of surpassing the threshold. Also, give known guys a second chance; many times your algorithms friendzoned (sorry for the term) them for reasons as silly as "he didn't touch me the first time we met, and I didn't feel his smell, because the table was wide" or "that day I was in Short-term mode and this other guy had more easily detectable attractive features, leaving John in the omega mental slot". Forget romantic comedies and princess tales where your role is passive. A man's love is actively conquered by a woman; you are the one who will fight dragons -- frequently RPG dragons -- for the guy in the beginning, not the opposite. The opposite comes later, as a prize.
For Short-terming Guys: Read Pick Up Artist books, and actually do the exercises -- as in, don't find excuses for why you can't; do them. Don't do anything that disgusts you morally, which may be nearly all of it, but do all the rest. Other than that?... Only a few things, very few indeed, were left out of those books. Optimize more than anything for your fetishes and specific desires, to avoid competition. Use mildly tense situations which can be confounded with arousal (narrow bridges get you more dates than wide bridges). Women's attractiveness peaks at approximately 1.73 m (5 feet 8 inches). Shorter women are more likely to have had less home stability and developmental stability when young, which triggers more frequent short-terming. Looking for testosterone indicators (square chin, prominent forehead, and especially a ring finger longer than the index finger) also helps, and it is fun because you can claim to read hands and actually make good predictions out of it.
For Short-terming Girls: I'll start with easy stuff, and escalate quickly to moves with extremely high success probability even in tough cases, such as when he's not in the mood, tired, really shy, or (you think) not excited. Quite likely the main obstacle is inside your mind, not your clothes: either fear of rejection, fear of reputational cost, or something else. Be confident. Few guys will reject a subtle, feminine, discreet, and firm sex "offer" (notice how language itself puts it). Look at him, smile, touch him while you speak, look intensely at his mouth while slowly approaching; make sure to do this where he is unlikely to be paying some reputational cost (not at his aunt's wedding). If feeling clumsy, mention that you do. When short-terming, men really do optimize for looks, so decrease light levels, and avoid available-female company, e.g. by asking him out to check a bookstore, or to see a movie. Sit near him while touching him, cut the conversation at some point, kiss him (remember to do that where neither of you may get embarrassed in front of anyone else). Before that, talk about sexuality naturally and in vivid images; say how important it is to you to be embraced, desired, enticed, penetrated, transformed inside, and to arise re-energized the next day to go back to your life. If you are sure he is short-terming, make yourself scarce by mentioning time constraints. Carry condoms, and pick them up while making out if he is still hesitant about whether you want sex or not. But be cozy and reassure him, "It's okay", if it feels like he's nervous. If you are comfortable with that, use the web: there are tons of Short-terming guys, and if you feel embarrassed to meet a man who would reject you, you are safeguarded by being filtered beforehand through your pictures and description, or by the Bang With Friends app. On the web, be upfront about your intentions, and assure them you are not a scam/bot/ad. When almost there, if he is not excited, it is not because you are not attractive to him; don't be passive -- slowly touch and rub his genitals. Quite likely he's just nervous, and you are competing against his sympathetic nervous system; when you and the parasympathetic win, he'll be excited and relaxed, and the party is on. If you live in a large urban area, go to swing places alone or with acquaintances, not friends; nowhere else will there be that many guys willing to have sex right there, right now, plus the necessary infrastructure for it, in a safe environment with security guards, other high-class women, etc., to make sure you are not getting into trouble. In short: guarantee situations in which neither he nor you pay reputational costs, be active yet reassuring, lower light levels, avoid competition, and make sure there is infrastructure for the act.
The saying goes that you can't achieve happiness by trying to be happy (though you can if you optimize for happiness, e.g. by reading positive psychology and acting on it). To some extent, it is also true that a lot of what goes on during courtship does not take place while actively and consciously focusing on courtship. It is one thing to keep these misconceptions and pieces of advice in mind, and a whole different thing to be obsessed with them and use them as canonical cognitive maxims for behaving. The point of writing this is to help; if it stops being helpful, stop using it.
I recently published Mortal, a novella-length My Little Pony fanfiction meant to introduce anti-death concepts to an unfamiliar audience. Short description:
Twilight Sparkle's friends have lived long and happy lives. Now their time is coming to an end, but Rainbow Dash, at least, will not go gently. Twilight has the power to save her friend's life. Is it worth violating the natural order?
This is a character-driven melodrama. It's not particularly rationalist, but it's very, very transhumanist. Unlike, say, Friendship is Optimal, I wouldn't necessarily recommend this one to people who don't already know the source. It assumes familiarity with the characters and the world.
I am going to talk about how I put together the story and how people reacted to it. This will contain spoilers.
This line exists so you can break out of the automatic "read everything on the page" mode if you want to avoid the spoilers.
This story was structured as something of a bait-and-switch. I watched the reaction to a previous transhumanist horsefic (yes, there's more than one), and I was struck by how easily readers matched the explicitly anti-death narrative to the "immortality is a curse" trope. Rather than fight against this trend, I decided to work with it. The first act is meant to look like a story about learning to accept the inevitability of death. Starting in chapter 3, I break further and further away from that mold until the protagonists finally rebel against the status quo.
The first chapters got a lot of people invested who I suspect would've been turned off by a less familiar opening. Once I was into the third act, I stopped being subtle and used every trick in the book to make the pro-death characters look like the unreasonable ones. Judging by the comments, there's no shortage of readers who were angry at having their expectations flouted, but quite a few seem thoughtful, and some explicitly changed their mind on the subject.
"Hypercomputation" is a term coined by two philosophers, Jack Copeland and Dianne Proudfoot, to refer to allegedly computational processes that do things Turing machines are in principle incapable of doing. I'm somewhat dubious of whether any of the proposals for "hypercomputation" are really accurately described as computation, but here, I'm more interested in another question: is there any chance it's possible to build a physical device that answers questions a Turing machine cannot answer?
I've read a number of Copeland and Proudfoot's articles promoting hypercomputation, and they claim this is an open question. I have, however, seen some indications that they're wrong about this, but my knowledge of physics and computability theory isn't enough to answer this question with confidence.
Some of the ways people convince themselves that "hypercomputation" might be physically possible seem like obvious confusions. For example: you convince yourself that some physical quantity is allowed to be any real number, notice that some reals are non-computable, and conclude that if only we could measure such a non-computable quantity, we could answer questions no Turing machine can answer. Of course, the idea of performing such a measurement is physically implausible even if you could find a non-computable physical quantity in the first place. And that mistake can be sexed up in various ways, for example by talking about "analog computers" and assuming "analog" means having components that can take any real-numbered value.
Points similar to the one I've just made exist in the literature on hypercomputation (see here and here, for example). But the critiques of hypercomputation I've found tend to focus on specific proposals. It's less clear whether the literature contains any good general argument that hypercomputation is physically impossible -- say, because it would require infinite-precision measurements or something equally unlikely. It seems like it might be possible to make such an argument; I've read that the laws of physics are considered to be computable, but I don't have a good enough understanding of what that means to tell whether it entails that hypercomputation is physically impossible.
Can anyone help me out here?
A common mistake people make with utility functions is taking individual utility numbers as meaningful, and performing operations such as adding them or doubling them. But utility functions are only defined up to positive affine transformation.
Talking about "utils" seems like it would encourage this sort of mistake; it makes it sound like some sort of quantity of stuff, that can be meaningfully added, scaled, etc. Now the use of a unit -- "utils" -- instead of bare real numbers does remind us that the scale we've picked is arbitrary, but it doesn't remind us that the zero we've picked is also arbitrary, and encourages such illegal operations as addition and scaling. It suggests linear, not affine.
But there is a common everyday quantity which we ordinarily measure with an affine scale, and that's temperature. Now, in fact, temperatures really do have an absolute zero (and if you make sufficient use of natural units, they have an absolute scale as well), but generally we measure temperature with scales that were invented before that fact was recognized. And so while we may have kelvins, we have "degrees Fahrenheit" and "degrees Celsius".
If you've used these scales long enough you recognize that it is meaningless to e.g. add things measured on these scales, or to multiply them by scalars. So I think it would be a helpful cognitive reminder to say something like "degrees utility" instead of "utils", to suggest an affine scale like we use for temperature, rather than a linear scale like we use for length or time or mass.
The analogy isn't entirely perfect, because as I've mentioned above, temperature actually can be measured on a linear scale (and with sufficient use of natural units, an absolute scale); but the point is just to prompt the right style of thinking, and in everyday life we usually think of temperature as an (ordered) affine thing, like utility.
As such I recommend saying "degrees utility" instead of "utils". If there is some other familiar quantity we also tend to use an affine scale for, perhaps an analogy with that could be used instead or as well.
Jonathan Birch recently published an interesting critique of Bostrom's simulation argument. Here's the abstract:
Nick Bostrom’s ‘Simulation Argument’ purports to show that, unless we are confident that advanced ‘posthuman’ civilizations are either extremely rare or extremely rarely interested in running simulations of their own ancestors, we should assign significant credence to the hypothesis that we are simulated. I argue that Bostrom does not succeed in grounding this constraint on credence. I first show that the Simulation Argument requires a curious form of selective scepticism, for it presupposes that we possess good evidence for claims about the physical limits of computation and yet lack good evidence for claims about our own physical constitution. I then show that two ways of modifying the argument so as to remove the need for this presupposition fail to preserve the original conclusion. Finally, I argue that, while there are unusual circumstances in which Bostrom’s selective scepticism might be reasonable, we do not currently find ourselves in such circumstances. There is no good reason to uphold the selective scepticism the Simulation Argument presupposes. There is thus no good reason to believe its conclusion.
The paper is behind a paywall, but I have uploaded it to my shared Dropbox folder, here.
EDIT: I emailed the author and am glad to see that he's decided to participate in the discussion below.
I'm reading Nurture Shock by Po Bronson & Ashley Merryman. Several things in the book, esp. the chapter on "Tools of the Mind", an intriguing education program, suggest that our education of young children not only isn't very good even when evaluated using tests that the curriculum was designed for, it's worse than just letting kids play. (My analogy and interpretation—don't blame this on the Tools people—is that conventional education may be like a Soviet five-year plan, trying to force children to acquire skills & knowledge that they would have been motivated to learn on their own if there weren't a school, and that early education shouldn't focus entirely on teaching specific facts, but also on teaching how to think, form abstractions, and control impulses.)
Say they're going to play fireman. The Tools teacher teaches the kids about what firemen do and what happens in a fire, and gives the kids different roles to play, then lets them play. They teach facts not because the facts are important, but to make the play session longer and more complicated. Tools does well in increasing test scores, but even better at reducing disruptive behavior. 
Tools has a variety of computer games that are designed to get kids to exercise particular cognitive skills, like focusing on something while being aware of background events. But the games often sound like more-boring ways of teaching kids the same things that video-games teach them.
Tools did no better than the existing curriculum on certain metrics in a recent larger study. But it didn't perform worse, either.
The first study you do with any biological intervention is to compare the intervention to a control group that has no intervention. But in education, AFAIK no one has ever done this. Everyone uses the existing curriculum as the control.
Whatever country you're in, what metrics do you use, and what evidence do you have that your schools are better than nothing at all?
There may be some things that you need to sit kids down and force them to learn -- say, arithmetic, math, and typing -- but I kinda doubt it's more than 20% of the grade school curriculum. I spent a lot of time practicing penmanship, futilely trying to memorize the capitals and chief exports of all fifty states, and studying the history of Thanksgiving and the American Revolution over and over again. We could have a short-classroom-hours control group, where kids spend a few hours a day learning those few facts they need to know, and the rest of the time playing.
 I fear somebody is going to complain that disruptive behavior is what we need to teach children so they can innovate and question authority. Open to discussion, but if it worked that way, we'd be overwhelmed with innovators and independent thinkers today.
 I actually learned the names of all the states from a song, and learned where they are from a jigsaw puzzle.
I've been looking for reliable evidence of a claim I've heard a few times. The claim is that the closing of sweatshops (by anti-globalization activists) has resulted in many of the child workers becoming prostitutes. The idea is frequently proffered as an example of do-gooder foolishness ignoring basic economics and screwing people over.
However, despite searching for a while, I can't find anything to indicate that this actually happened.
Some guy at the Library of Economics and Liberty mentions it here:
In one famous 1993 case U.S. senator Tom Harkin proposed banning imports from countries that employed children in sweatshops. In response a factory in Bangladesh laid off 50,000 children. What was their next best alternative? According to the British charity Oxfam a large number of them became prostitutes.
And in this article, Paul Krugman also mentions the Oxfam study, without citation:
In 1993, child workers in Bangladesh were found to be producing clothing for Wal-Mart, and Senator Tom Harkin proposed legislation banning imports from countries employing underage workers. The direct result was that Bangladeshi textile factories stopped employing children. But did the children go back to school? Did they return to happy homes? Not according to Oxfam, which found that the displaced child workers ended up in even worse jobs, or on the streets -- and that a significant number were forced into prostitution.
I looked at some Oxfam stuff, but couldn't find the study.
A similar claim is made in The Race to the Top: The Real Story of Globalization by Tomas Larsson (go here and use the search tool for the word 'prostitution'), but doesn't mention the Oxfam study:
Keith E. Maskus, an economist at the University of Colorado, has studied the issue... He concludes that... "The celebrated French ban of soccer balls sewn in Pakistan for the World Cup in 1998 resulted in significant dislocation of children from employment. Those who tracked them found that a large proportion ended up begging and/or in prostitution,"
I looked for a paper or something by Maskus but came up empty.
I was taught this fact in a Poli Sci class in college, but I'm starting to think it's more likely to be an information cascade. Can anyone do a better job than me?
Thanks in advance.
How long can you work on making a routine task more efficient before you're spending more time than you save?
Of course, the chart's author isn't the first person to ask this question, but the visual is handy. Note that the times are calculated assuming you'll save the time over five years.
For example, I've been pondering how to shorten my showers. If I can shave off 1 minute daily, I should be willing to invest up to (but no more than) an entire day to do it. If I think I can shave off five minutes preparing my breakfast, I should be willing to spend up to six days attempting to do so.
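The arithmetic behind those numbers, assuming a daily task and the chart's five-year horizon, is just:

```python
# Break-even effort for shaving time off a daily routine task,
# assuming (like the chart) the saving accrues daily for five years.
def break_even_days(minutes_saved_daily, years=5):
    total_minutes_saved = minutes_saved_daily * 365 * years
    return total_minutes_saved / (60 * 24)  # expressed in 24-hour days

print(break_even_days(1))  # ~1.3 days for a 1-minute daily saving
print(break_even_days(5))  # ~6.3 days for a 5-minute daily saving
```

That roughly matches the "entire day" and "six days" figures above.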
The following section will be at the top of all posts in the LW Women series.
Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post. There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.
Seven women replied, totaling about 18 pages.
Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)
To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.
Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.
The class that a lot of creepiness falls into for me is not respecting my no. Someone who doesn't respect a small no can't be trusted to respect a big one, when we're alone and I have fewer options to enforce it beside physical strength. Sometimes not respecting a no can be a matter of omission or carelessness, but I can't tell which.
While I'm in doubt, I'm not assuming the worst of you, but I'm on edge and alertly looking for new data in a way that's stressful for me and makes it hard for either of us to enjoy the encounter. And I'm sure as heck not going anywhere alone with you.
I've written up some short anecdotes that involved someone not respecting or constraining a no. They're at a range of intensities.
Joining someone for the first time and sitting down in a spot that blocks their exit from the conversation. Sometimes unavoidable (imagine joining people at a booth) but limits my options to exit and enforce a no.
Blocking an exit less literally by coming across as the kind of person who can't end a conversation (follows you between circles at a party, limits your ability to talk to other people, etc).
Asking for a number instead of offering yours. If I want to call you, I will, but when you ask for my number, I can't stop you calling or harassing me in the future.
Asking for a number while blocking my exit. This has happened to me in cabs when I take them late at night. It's bad to start with because I can't exit a moving car and I can't control the direction it's going in. One driver waited until the end of the ride, asked for my number, and then handed my receipt back and demanded it when I didn't comply. I had to write down a fake one to get out without escalating. This is why I'm torn between walking through a deserted part of town or taking a cab alone at night.
Talking about other girls who gave you "invalid" nos. Anything on the order of "She was flirting with me all night and then she wouldn't put out/call me back/meet for coffee." Responding positively to you is not a promise to do anything else, and it's not leading you on. This kind of assumption is why I'm a little hesitant to be warm to a strange guy if I'm in a place where it would be hard to enforce a no.
Withholding information to constrain my no. The culprit here was a girl and the target was a friend of mine. The two of them had gone on a date and set a time to meet again and possibly have sex. The girl had a boyfriend, but was in some kind of open relationship and had informed my friend of this fact. What she didn't disclose was that the boyfriend was back in town the night of their second date. She waited to reveal that until my friend had turned up. My friend still had the power to say no, and did, but there was nothing preventing the girl from disclosing that data earlier, when my friend could have postponed or demurred by text. Waiting til she'd already shlepped to the apartment put more pressure on her. It suggested the girl would rather rig the game than respect a no.
Overstepping physical boundaries and then assigning the blame to me. You might go for a kiss in error or touch me in a way I'm not comfortable with. Say sorry and move on. Don't say, "You looked like you wanted to be kissed." That implies my no is less valid if you're confused.
Follow-up To: Reinforcement and Short-Term Rewards as Anti-Akratic
I'm still working on cleaning up my scheduling system for release, as I mentioned in the comments on my last post. However, I failed to account for the end of my college semester, which is taking up a distressing amount of my time. So, although progress is being made, I'm not quite done yet, and probably won't be until sometime after my final exams end on the 16th. In the meantime, I'm going to explain my scheduling system and some of the modifications I've made to it.
My system is derived from the Pomodoro Technique. In it, work is separated into individual 25-minute blocks, also called "Pomodoros." To ensure that blocks last the full 25 minutes, they're timed; once the timer has started, the block should not be interrupted until the timer runs out. There's a short break between each Pomodoro; after several Pomodoros, there's a longer break.
The biggest benefit I've noticed from using my system is in fixing my problems with task switching. When I was doing something I didn't much like, I used to think about doing something else almost constantly; it usually wasn't long before I stopped working to do something else. The original Pomodoro Technique solved this problem by forcing me to wait until the timer had expired to stop working. However, I had another problem with task switching that the original technique didn't touch. When I was slacking off, I could sit contented for hours without doing anything else; I found it hard to start working or to stop slacking off. That's where my changes came in. These problems are very similar: in one, I change tasks too infrequently; in the other, I changed tasks too often. It stands to reason, then, that they could both be solved the same way: by timing them. So, in my system, everything I do is treated like work is under the Pomodoro Technique, even slacking off.
That's the biggest change my system makes: everything is a block (or a Pomodoro), and I'm in a block all the time. However, my system is more than just a few rule tweaks. My system is computerized; I use a web application for my block timer, as well as for managing my task list and the various other add-ons my system has. I've also made a number of more subtle decisions that better adapt the system for computerization.
Like in the Pomodoro System, my system times each block of work I do. After the work period ends (usually 25 minutes), my system enters a 5-minute break period. During this break period, I preload my next task into the system so that I can start working as soon as the break ends, without having to futz with the timer. If I forget to preload a task, my system doesn't start anything automatically; I'm just left outside of a block, which I consider to be a failure state that I always try to avoid.
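As a rough illustration, here's a minimal sketch of the work/break cycle just described; the real system is a web application, and everything here (durations, the preload prompt, the console interface) is a simplified stand-in, not the author's actual code:

```python
import time

# Minimal sketch of the block/break cycle: work a full block, then
# preload the next task during the break so the next block can start
# immediately. All durations and the console prompt are stand-ins.
WORK_SECONDS = 25 * 60
BREAK_SECONDS = 5 * 60

def run(first_task):
    task = first_task
    while task:
        print(f"Block started: {task}")
        time.sleep(WORK_SECONDS)   # the block runs uninterrupted
        # An empty entry here leaves you outside a block -- the
        # failure state the post describes trying to avoid.
        task = input("On break -- preload next task (blank = none): ")
        time.sleep(BREAK_SECONDS)

run("write LW post")
```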
My system also integrates a task list; to start a block, I must choose my task from the list. This also helps to improve my productivity. Because I choose tasks from a list of all my potential activities, it's easier to find and select tasks with higher activation energy, instead of falling back on cached procrastination. Forcing me to select a task from a list also makes me explicitly consider what I ought to be doing with my time.
A web application is nice, but on its own it's a bit less useful than a traditional timer in several ways. It doesn't ring, for instance, and I have to open it up every time I want to check how much time I have left. So, I built an application, running on another computer on my desk, that handles all of those things. It rings a digital gong when the current timer ends. It shows whether I'm in a break, in a task, or whether my task has expired, by changing the color of the screen. It displays in text the current task, some information about it, and how much time is left on the timer. Right now, this is a fairly bare-bones terminal application; one of the things I'm working on in my current revision is making it look a bit nicer.
Of course, my extrinsic motivator from my previous article is tied into this system as well. Simply put, it rewards me with candy for keeping on track with my schedule. The rules it follows are more precisely explained in its own article. I'm rewriting the rules, however; expect a new article about them in a few weeks.
Even the best scheduling system in the world would be of no use if I couldn't bring myself to follow it. That's what my browser plugin is for. When I don't have a block timer active, or if I'm trying to access a non-productive web site during a productive block, my browser plugin will block the site and tell me to go start a block. I can still override the plugin, but the plugin requires me to wait 10 seconds before I get the option. Since most of my procrastination time is spent on the Internet, the plugin is an effective way of reminding me to turn the system back on.
Since my goal is to keep the system on at all times, it's a bit problematic that many real-world tasks don't divide neatly into Pomodoro-sized chunks -- things like eating dinner, walking the dog, or sleeping. In order to track them, my system has a category of "real-world" tasks which run for an indefinite amount of time. However, such a task would seem open to abuse; to prevent that, my browser plugin blocks my access to the Internet during them, just as if I weren't in a block at all.
My original plans for the system included things like reports on time usage and a system to help me calibrate my expectations for the amount of time a task is likely to take. However, I've yet to implement any of these, and honestly I'm still not sure what the best way to implement these would be. Any interesting suggestions would be appreciated; I hope to write an article about building these systems sometime soon.
When a group of people talk to each other a lot they develop terms that they can use in place of larger concepts. This makes it easier to talk to people inside the group, but then it's harder to talk about the same ideas with people outside the group. If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words. Especially when the group jargon term offers only a minor advantage over the non-jargon phrasing I think this is a bad tradeoff if you also want to speak to people outside the group.
Recently I've been working on using as little jargon as possible. Pushing myself to speak conventionally, even when among people who would understand weird terms a little faster, can be frustrating, but I think I'm also getting better at it.
I also posted this on my blog
In the book "How to Measure Anything" D. Hubbard presents a step-by-step method for calibrating your confidence intervals, which he has tested on hundreds of people, showing that it can make 90% of people almost perfect estimators within half a day of training.
I've been told that the Less Wrong and CFAR community is mostly not aware of this work, so given the importance of making good estimates to rationality, I thought it would be of interest.
(although note CFAR has developed its own games for training confidence interval calibration)
The main techniques to employ are:
Pros and cons:
It makes a lot of sense for the Google people to be transhumanist, with Sergey Brin and Larry Page working with Singularity University, but still I was surprised to hear this on the recent Colbert Report (April 23rd):
Colbert: Can I live forever?
Schmidt: But not now. They need to invent some more medicine.
Colbert: So I can live forever, but later. So I just need to live long enough for later to become now.
Schmidt: But your digital identity will live forever. Because there's no delete button.
Colbert: On me?
Schmidt: That's correct.
Colbert: That's profound.
He seemed quite serious, too.
I guess a lot of people would take transhumanism more seriously if they heard the top people at Google are in. As for me, I actually find it makes Google seem more trustworthy. In-group psychology is weird.
Here's another good interview with Eric Schmidt. No explicit transhumanism, but some fairly intense plans entirely compatible with it.
This article is just some major questions concerning morality, each broken up into sub-questions to assist in answering the major question. It's not a criticism of any morality in particular, but rather what I hope is a useful way to consider any moral system, and hopefully a help to people in challenging their assumptions about their own moral systems. I don't expect responses to try to answer these questions; indeed, I'd prefer you don't. My preferred responses would be changes, additions, clarifications, or challenges to the questions or to the objective of this article.
First major question: Could you morally advocate other people adopt your moral system?
This isn't as trivial a question as it seems on its face. Take a strawman hedonism, for a very simple example. Is a hedonist's pleasure maximized by encouraging other people to pursue -their- pleasure? Or would it be better served by convincing them to pursue the pleasure of others (a class of which our strawman hedonist is a member)?
It's not merely selfish moralities that suffer meta-moral problems. I've encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than they are fills them with a deep guilt they cannot resolve. If their goal is truly the happiness of others, spreading their moral system is a short-term evil. (It may be a long-term good, depending on how they do their accounting, but non-moral altruism isn't actually a rare quality, so I think an honest accounting would suggest their moral system doesn't add much additional altruism to the world, only a lot of guilt about the fact that not much altruistic action is taking place.)
Note: I use the word "altruism" here in its modern, non-Comtean sense. Altruism is that which benefits others.
Does your moral system make you unhappy, on the whole? Does it, like most moral systems, place a value on happiness? Would it make the average person more or less happy if they, and they alone, adopted it? Are your expectations of the moral value of your moral system predicated on an unrealistic scenario of universal acceptance? Maybe your moral system isn't itself very moral.
Second: Do you think your moral system makes you a more moral person?
Does your moral system promote moral actions? How much of your engagement with your morality is spent feeling good because you've effectively promoted the moral system, rather than promoting the values inherent in it?
Do you behave any differently than you would if you operated under a "common law" morality, such as social norms and laws? That is, does your ethical system make you behave differently than if you didn't possess it? Are you evaluating the merits of your moral system solely on how it answers hypothetical situations, rather than how it addresses your day-to-day life?
Does your moral system promote behaviors you're uncomfortable with and/or could not actually do, such as pushing people in the way of trolleys to save more people?
Third: Does your moral system promote morality, or itself as a moral system?
Is the primary contribution of your moral system to your life adding outrage that other people -don't- follow your moral system? Do you feel that people who follow other moral systems are immoral even if they end up behaving in exactly the same way you do? Does your moral system imply complex calculations which aren't actually taking place? Is the primary purpose of your moral system encouraging moral behavior, or defining what the moral behavior would have been after the fact?
Considered as a meme or memeplex, does your moral system seem better suited to propagating itself than to encouraging morality? Do you think "The primary purpose of this moral system is ensuring that these morals continue to exist" could be an accurate description of your moral system? Does the moral system promote the belief that people who don't follow it are completely immoral?
Fourth: Is the major purpose of your morality morality itself?
This is a rather tough question to elaborate with further questions, so I suppose I should try to clarify a bit first: Take a strawman utilitarianism where "utility" -really is- what the morality is all about, where somebody has painstakingly gone through and assigned utility points to various things (this is kind of common in game-based moral systems, where you're just accumulating some kind of moral points, positive or negative). Or imagine (tough, I know) a religious morality where the sole objective of the moral system is satisfying God's will. That is, does your moral system define morality to be about something abstract and immeasurable, defined only in the context of your moral system? Is your moral system a tautology, which must be accepted to even be meaningful?
This one can be difficult to identify from the inside, because to some extent -all- human morality is tautological; you have to examine your system against other moralities to see whether it's a unique island of tautology or whether it speaks to human moral concerns in the general case. With that in mind, when you argue with other people about your ethical system, do they -always- seem to miss the point? Do they keep trying to reframe moral questions in terms of other moral systems? Do they bring up things which have nothing to do with (your) morality?
Michael Vassar, former president of the Singularity Institute and now Chief Science Officer of MetaMed, is visiting Europe and wants to meet up with Less Wrongers there. His schedule is:
25 April: Berlin
29 April: Estonia
8 May: London
12 May: Oslo
16 May: Nice (but may be able to meet people in Paris?)
26 May: Home to USA
If you have a meetup group in or near one of these cities, or you can put some people together, he's interested in talking about the Singularity, optimal philanthropy, and his work with MetaMed. You can reach him at michael.vassar[at]gmail.com