I've often had half-finished LW post ideas and crossed them off for a number of reasons: mostly they were too rough or undeveloped, and I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or simply want some community input before posting.

So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!

The ideas you're not ready to post

The Dilbert Challenge: you are working in a company in the world of Dilbert. Your pointy-haired boss comes to you with the following demand:

"One year from today, our most important customer will deliver us a request for a high-quality reliable software system. Your job and the fate of the company depends on being able to develop and deploy that software system within two weeks of receipt of the specifications. Unfortunately we don't currently know any of the requirements. Get started now."

I submit that this preposterous demand is really a deep intellectual challenge, the basic form of which arises in many different endeavors. For example, it's reasonable to believe that at some point in the future, humanity will face an existential threat. Given that we will not know the exact nature of that threat until it's almost upon us, how can we prepare for it today?

8cousin_it
Wow. I'm a relatively long-time participant, but never really "got" the reasons why we need something like rationality until I read your comment. Here's thanks and an upvote.
3thomblake
That's one of the stated objectives of computer ethics (my philosophical sub-field) - to determine, in general, how to solve problems that nobody's thought of yet. I'm not sure how well we're doing at that so far.
MBlume 270

On the Care and Feeding of Rationalist Hardware

Many words have been spent here in improving rationalist software -- training patterns of thought which will help us to achieve truth, and reliably reach our goals.

Assuming we can still remember so far back, Eliezer once wrote:

But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you're a fast learner, you might learn faster - but the art of rationality isn't about that; it's about training brain machinery we all have in common.

Rationality does not require big impressive brains any more than the martial arts require big bulging muscles. Nonetheless, I think it would be rare indeed to see a master of the martial arts willfully neglecting the care of his body. Martial artists of the wisest schools strive to improve their bodies. They jog, or lift weights. They probably do not smoke, or eat unhealthily. They take care of their hardware so that the things they do will be as easy as possible.

So, what hacks exist which enable us to improve and secure the condition of our mental hardware? Some important areas that come to mind are:

  • sleep
  • diet
  • practice
6Vladimir_Golovin
I'd definitely want to read about a good brain-improving diet (I have no problems with weight, so I'd prefer not to mix these two issues).
5AngryParsley
I agree. LW doesn't have many posts about maintaining and improving the brain. I would also add aerobic exercise to your list, and possibly drugs. For example, caffeine or modafinil can help improve concentration and motivation. Unfortunately they're habit-forming and have various health effects, so it's not a simple decision.
4randallsquared
I've only had modafinil once (it was amazing in the concentration-boosting department), but I have a lot of experience with caffeine, and for me its effects are primarily on mood. Large amounts of caffeine destroy concentration, offsetting any improvements, and, like other drugs, the effect grows weaker the longer you take it. On the plus side, caffeine is only weakly addictive, so you can just stop every now and then to reset things, which I do every few months.
3Drahflow
While we are at it:
  • caffeine
  • meditation
  • music
  • mood
  • social interaction
Also, which hacks are available to better interface our mental hardware with the real world:
  • information presentation
  • automated information filtering
2blogospheroid
Increasing the level of fruit in my diet helped me maintain a positive mood for longer. I tried it when I was alone for a while in a foreign country, so I'm not sure if it was a placebo effect.
1jimmy
Piracetam and other "nootropics" are worth checking out. Piracetam supposedly helps with memory and cognition by increasing blood flow to the brain or something... I got some to play around with and will let you guys know if anything interesting happens.
2[anonymous]
Piracetam works by influencing acetylcholine. Vinpocetine and ginkgo biloba are examples of vasodilators (they work by increasing blood flow to the brain). I (strongly) recommend adding a choline supplement when supplementing with piracetam (and the other *racetams). You burn through choline more quickly when using them, and so can end up with mediocre results and sometimes a headache if you neglect a choline supplement. Also give the piracetam a couple of weeks before you expect to feel the full impact. The imminst.org forums have a useful subforum on nootropics that is worth checking out.
0jimmy
Thanks for the info. I was planning on trying it without the choline first to see if it was really needed. Any ideas on how to actually test performance?
6badger
Seth Roberts tracked the influence of omega-3 on brain function via arithmetic tests in R: http://www.blog.sethroberts.net/2009/01/05/tracking-how-well-my-brain-is-working/ http://www.blog.sethroberts.net/2007/04/14/omega-3-and-arithmetic-continued/ It's a little hard to distinguish the benefit from practice and the benefit from omega-3, so ideally you'd alternate periods of supplement and no supplement.
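A minimal sketch of how one might separate the practice trend from the supplement effect in such a self-experiment (the data, effect sizes, and variable names below are invented for illustration):

```python
import numpy as np

# Hypothetical daily arithmetic-test scores from a self-experiment.
# day: days since the experiment started; on_supplement: 1 on omega-3 days.
day = np.arange(60)
on_supplement = (day // 10) % 2          # alternate 10-day on/off blocks
rng = np.random.default_rng(0)
score = 50 + 0.1 * day + 2.0 * on_supplement + rng.normal(0, 1.5, 60)

# Fit: score ~ intercept + linear practice trend + supplement effect.
X = np.column_stack([np.ones_like(day), day, on_supplement])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"practice trend: {coef[1]:.3f} points/day")
print(f"estimated supplement effect: {coef[2]:.2f} points")
```

Alternating on/off blocks is what makes the two coefficients separable; with a single switch-over date, the practice trend and the supplement effect would be nearly confounded.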
3Desrtopa
Also, ideally you wouldn't know when you were getting omega-3 and when you were getting a placebo during the course of the experiment.
1[anonymous]
If you are going to spend time researching this, I suggest including the agents of short-term cognitive decline (cognitive impairment in jargon). I once scored 103 on an unofficial (but normed) online IQ test after drinking 3 whiskeys the night before, and feeling just a little bit unmotivated. Depression is also known to, uh, depress performance.

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basis vectors during PCA, and so will reduce all future vectors into different 25D spaces.
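A minimal sketch of this analogy in code (assuming scikit-learn's PCA; the data and dimensions are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# Two "agents" see 500-D data drawn from differently biased distributions,
# then each compresses everything it sees into its own 25-D PCA space.
rng = np.random.default_rng(0)
basis_a = rng.normal(size=(25, 500))   # directions emphasized in world A
basis_b = rng.normal(size=(25, 500))   # directions emphasized in world B
data_a = rng.normal(size=(1000, 25)) @ basis_a
data_b = rng.normal(size=(1000, 25)) @ basis_b

pca_a = PCA(n_components=25).fit(data_a)
pca_b = PCA(n_components=25).fit(data_b)

# Give both agents the same new observation ("the same words"):
x = rng.normal(size=(1, 500))
xa = pca_a.inverse_transform(pca_a.transform(x))  # what A recovers
xb = pca_b.inverse_transform(pca_b.transform(x))  # what B recovers

# The internal representations live in different subspaces, so identical
# input yields substantially different reconstructions:
print(np.linalg.norm(xa - xb))
```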

In just this way, two people with life experiences that differ in a biased way (due to e.g. socioeconomic status, country of birth, culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations each constructs internally are incommensurate; they exist in different spaces, which introduce different errors. When they reason on their compressed data, they will reach different conclusions, even if they are using the same reasoning algorithms and executing them flawlessly. Furthermore, it would be very hard for them to discover this, since the compression scheme is unconscious. They would be more likely to believe that the other person is lying, nefarious, or stupid.

5ChrisHibbert
If you're going to write about this, be sure to account for the fact that many people report successful communication in many different ways. People say that they have found their soul-mate, many of us have similar reactions to particular works of literature and art, etc. People often claim that someone else's writing expresses an experience or an emotion in fine detail.
4David_Gerard
FWIW, this is one of the problems postmodernism attempts to address: the bit that's a series of exercises in getting into other people's heads to read a given text.
4Jade
Does it work for understanding non-human peoples?
3Daniel_Burfoot
Yeah. I thought about this a lot in the context of the Hanson/Yudkowsky debate about the unmentionable event. As was frequently pointed out, both parties aspired to rationality and were debating in good faith, with the goal of getting closer to the truth. Their belief was that two rationalists should be able to assign roughly the same probability to the same sequence of events X. That is, if the event X is objectively defined, then the problem of estimating p(X) is an objective one, and all rational persons should obtain roughly the same value. The problem is that we don't - maybe can't - estimate probabilities in isolation from other data. All estimates we make are really of conditional probabilities p(X|D), where D is a person's unique, huge background dataset. The background dataset primes our compression/inference system. To use the Solomonoff idea, our brains construct a reasonably short code for D, and then use the same set of modules that were helpful in compressing D to compress X.
1John_Maxwell
No idea what PCA means, but this sounds like a very mathematical way of expressing an idea that is often proposed by left-wingers in other fields.
1conchis
Principal Components Analysis
1MendelSchmiedekamp
I want to write about this too, but almost certainly from a very different angle, dealing with communication and the flow of information. And perhaps at some point I will have the time.

There is a topic I have in mind that could potentially require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, whom I've mentioned before), built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of in

... (read more)
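A minimal sketch of the kind of model-free controller described in the bullets above (the toy environment, gain, and disturbance below are invented for illustration): it makes no predictions and holds no world-model, yet keeps its perception near the set-point.

```python
import random

def control_loop(set_point, perceive, act_on_world, gain=0.5, steps=100):
    """Classic negative feedback: act in proportion to the error between
    the set-point and the current perception. No model, no prediction."""
    for _ in range(steps):
        error = set_point - perceive()
        act_on_world(gain * error)

# Toy environment: a single state value disturbed at every step.
state = 0.0
def perceive():
    return state
def act_on_world(output):
    global state
    state += output + random.uniform(-0.2, 0.2)   # action plus disturbance

control_loop(set_point=5.0, perceive=perceive, act_on_world=act_on_world)
print(f"state after control: {state:.2f}")        # ~5.0 despite disturbances
```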
2[anonymous]
I wouldn't dump a huge essay on the site. It seems that this medium has taken on the form of dividing the material into separate posts, and then stringing them together into a sequence. Each post should be whole in itself, but may presume that readers already have the background knowledge contained in previous posts of the sequence. I've thought about writing to try to persuade people here into a form of virtue theory, but before that I would want to write a post attacking anti-naturalist ethics. I would use the same sort of form.
2pjeby
I agree with some of your points -- well, all of them if we're discussing control systems in general -- but a couple of them don't quite apply to brains, as the cortical systems of brains in general (not just in humans) do use predictive models in order to implement both perception and behavior. Humans at least can also run those models forward and backward for planning and behavior generation. The other point, about actions determining perceptions, is "sorta" true of brains, in that eye saccades are a good example of that concept. However, not all perception is like that; frogs for example don't move their eyes, but rely on external object movement for most of their sight. So I think it'd be more accurate to say that where brains and nervous systems are concerned, there's a continuous feedback loop between actions, perceptions, and models. That is, models drive actions, actions generate raw data that's filtered through a model to become a perception, that may update one or more models. Apart from that though, I'd say that your other three points apply to people and animals quite well.
2rhollerith
Heck yeah, I want to see it. I suggest adopting Eliezer's modus operandi of using a lot of words. And every time you see something in your draft post that might need explanation, post on that topic first.
1JulianMorrison
It sounds like you want to write a book! But a post would be much appreciated.
1Richard_Kennaway
There are several books already on the particular take on control theory that I intend to write about, so I'm just thinking in terms of blog posts, and keeping them relevant to the mission of LW. I've just realised I have a shortage of evenings for the rest of this week, so it may take some days before I can take a run at it.
1cousin_it
I'd love to see this as a top-level post. Here's additional material for you: online demos of perceptual control theory, Braitenberg vehicles.
0Richard_Kennaway
I know the PCT site :-) It was Bill Powers' first book that introduced me to PCT. Have you tried the demos on that site yourself?
0cousin_it
Yes, I went through all of them several years ago. Like evolutionary psychology, the approach seems to be mostly correct descriptively, even obvious, but not easy to apply to cause actual changes. (Of course utility function-based approaches are much worse.)
0Vladimir_Nesov
But they should act according to a rigorous decision theory, even though they often don't. It seems an elementary enough statement, so I'm not sure what you are asserting.
2cousin_it
"Should" statements cannot be logically derived from factual statements. Population evolution leads to evolutionarily stable strategies, not coherent decision theories.
0Vladimir_Nesov
"Should" statements come from somewhere, somewhere in the world (I'm thinking about that in the context of something close to "The Meaning of Right"). Why do you mention evolution?
2cousin_it
In that post Eliezer just explains in his usual long-winded manner that morality is our brain's morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn't. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile? (I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer's teachings, back into ordinary common sense. Just so you know.)
0Vladimir_Nesov
To simplify one of the points a little. There are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, looking at the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, that you were mistaken before. No hidden agenda figures into this process, as it doesn't require interacting with anyone, this process may theoretically be wholly personal, you against math.
0cousin_it
Yes, an agent with a well-defined utility function "should" act to maximize it with a rigorous decision theory. Well, I'm glad I'm not such an agent. I'm very glad my life isn't governed by a simple numerical parameter like money or number of offspring. Well, there is some such parameter, but its definition includes so many of my neurons as to be unusable in practice. Joy!
0Vladimir_Nesov
No joy in that. We are ignorant and helpless in attempts to find this answer accurately. But we can still try, we can still infer some answers, the cases where our intuitive judgment systematically goes wrong, to make it better!
1ArisKatsaris
What if our mind has embedded in its utility function the desire not to be more accurately aware of it? What if some people don't prefer to be more self-aware than they currently are, or their true preferences indeed lie in the direction of less self-awareness?
4JGWeissman
Then it would be right for instrumental reasons to be as self-aware as we need to be during the crunch time that we are working to produce (or support the production of) a non-sentient optimizer (or at least another sort of mind that doesn't have such self-crippling preferences) which can be aware on our behalf and reduce or limit our own self awareness if that actually turns out to be the right thing to do.
3wedrifid
Careful. Some people get offended if you say things like that. Aversion to publicly admitting that they prefer not to be aware is built in as part of the same preference.
0TheOtherDave
OTOH, if it also comes packaged with an inability to notice public assertions that they prefer not to be aware, then you're safe.
0wedrifid
If only... :P
2Vladimir_Nesov
Then how would you ever know? Rational ignorance is really hard.
0Daniel_Burfoot
I don't necessarily believe you, but I would be happy to read what you write :-) I would also be happy to learn more about control theory. To comment further would require me to touch on unmentionable subjects.

We are Eliza: A whole lot of what we think is reasoned debate is pattern-matching on other people's sentences, without ever parsing them.

I wrote a bit about this in 1998.

But I'm not as enthused about this topic as I was then, because then I believed that parsing a sentence was reasonable. Now I believe that humans don't parse sentences even when reading carefully. The bird the cat the dog chased chased flew. Any linguist today would tell you that's a perfectly fine English sentence. It isn't. And if people can't parse grammatical structures with just two levels of recursion, I doubt recursion, and generative grammars, are involved at all.
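For readers who want to see the recursion being blamed here, a toy generator for such center-embedded sentences (hypothetical code, purely illustrative):

```python
def center_embed(nouns, verbs):
    """Build 'the N1 the N2 ... V2 V1' sentences; each extra noun/verb
    pair adds one level of center-embedded recursion."""
    subjects = " ".join(f"the {n}" for n in nouns)
    return f"{subjects} {' '.join(verbs)}"

print(center_embed(["bird", "cat", "dog"], ["chased", "chased", "flew"]))
# -> "the bird the cat the dog chased chased flew"
```

The grammar happily generates arbitrarily deep embeddings; human readers choke at about two, which is the tension the comment is pointing at.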

4pangloss
I believe that linguists would typically claim that it is formed by legitimate rules of English syntax, but point out that there might be processing constraints on humans that eliminate some syntactically well-formed sentences from the category of grammatical sentences of English.
2JulianMorrison
Eh, I could read it, with some stack juggling. I can even force myself to parse the "buffalo" sentence ;-P
3William
You can force yourself to parse the sentence but I suspect that the part of your brain that you use to parse it is different from the one you use in normal reading and in fact closer to the part of the brain you use to solve a puzzle.
1JulianMorrison
I puzzle what goes where, but the bit that holds the parse once I've assembled it feels the same as normal.
0randallsquared
The result isn't as important as the process in this case. Even if the result is stored the same way, for the purpose of William's statement it's only necessary that the process is sufficiently different.
0Risto_Saarelma
A bit like described in this Stephen Bond piece?

I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: a while back I read Omohundro's vulnerability argument, but felt there were missing bits that I had to personally fill in, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true". Some things I think I can generalize a bit or restate a bit, etc.

So, as much for myself, to organize and clear that up, as for others, I want to do a short series of "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual Dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt the need to work out, that I felt were missing.)

And I want to do it as a series, rather than a single blob post so I can step by step focus on a small chunk of the problem and make it easier to reference related rules and so on.

Would this be of any use to anyone here though? (maybe a good sequence for beginners, to show one reason why Bayes and Decision Theory is the Right Way?) Or would it be more clutter than anything else?

1Eliezer Yudkowsky
It's got my upvote.
0Cyan
I have a similar plan -- however, I don't know when I'll get to my post and I don't think the material I wanted to discuss would overlap greatly with yours.
0Vladimir_Nesov
Can you characterize a bit more concretely what you mean, by zooming in on a tiny part of this planned work? It's no easy task to go from common sense to math without shooting both your feet off in the process.
1Psy-Kosh
Basically, I want to reconstruct, slowly, the Dutch book and vulnerability arguments, step by step, with all the bits that confused me filled in. The basic common-sense rule these are built on is "don't accept a situation in which you know you automatically lose" (where "lose" is used at the same level of generality as "win" in "rationalists win"). One of the reasons I like Dutch book/vulnerability arguments is that each step from that principle to the math ends up being relatively straightforward. (Sometimes an additional concept needs to be introduced - not so much proven as defined and made explicit.)
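To illustrate the kind of step this involves, a minimal worked Dutch book (the numbers are invented): an agent whose prices for an event and its complement sum to more than 1 will accept a pair of bets that lose in every outcome.

```python
# Toy Dutch book: an agent prices a $1-payout bet on A at p_A, and a
# $1-payout bet on not-A at p_notA. If p_A + p_notA > 1, a bookie can
# sell it both bets and profit no matter what happens.
p_A, p_notA = 0.6, 0.6           # incoherent: the two prices sum to 1.2

cost = p_A + p_notA              # the agent pays this for both tickets
for a_happens in (True, False):
    payout = 1.0                 # exactly one of the two tickets pays $1
    print(f"A={a_happens}: agent's net = {payout - cost:+.2f}")
# Prints -0.20 in both cases: a guaranteed loss, whatever the outcome.
```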
0JulianMorrison
Sounds interesting.
MBlume 120

This doesn't even have an ending, but since I'm just emptying out the drafts folder...

Memetic Parasitism

I heard a rather infuriating commercial on the radio today. There's no need for me to recount it directly -- we've all heard the type. The narrator spoke of the joy a woman feels in her husband's proposal, of how long she'll remember its particulars, and then, for no apparent reason, transitioned from this to a discussion of shiny rocks, and where we might think of purchasing them.

I hardly think I need to belabor the point, but there is no natural connection between shiny rocks and promises of monogamy. There was not even any particularly strong empirical connection between the two until about a hundred years ago, when some men who made their fortunes selling shiny rocks decided to program us to believe there was.

What we see here is what I shall call memetic parasitism. We carry certain ideas, certain concepts, certain memes to which we attach high emotional valence. In this case, that meme is romantic love, expressed through monogamy. An external agent contrives to derive some benefit by attaching itself to that meme.

Now, it is important to note when describing a Dark pattern t... (read more)

4Nanani
A series on Defense Against the Dark Arts would not be unwelcome, especially for those who haven't gone through the OB backlog. Voting up.
0[anonymous]
Anti-advertising campaigners have tried. The trouble is that their advocacy was immediately parasitized by shiny-rock sellers of the political sort, and people tend to reject or accept both messages at once.

Buddhism.

What it gets wrong. Supernatural stuff - rebirth, karma in the magic sense, prayer. Thinking Buddha's cosmology was ever meant as anything more than an illustrative fable. Renunciation. Equating positive and negative emotions with grasping. Equating the mind with the chatty mind.

What it gets right. Meditation. Karma as consequences. There is no self, consciousness is a brain subsystem, emphasis on the "sub" (Cf. Drescher's "Cartesian Camcorder" and psychology's "system two"). The chatty mind is full of crap and a huge waste of time, unless used correctly. Correct usage includes noticing mostly-subconscious thought loops (Cf. cognitive behavioral therapy). A lot of everyday unreason does stem from grasping, which roughly equates to "magical thinking" or the idea that non-acknowledgment of reality can change it. This includes various vices and dark emotions, including the ones that screw up attempted rationality.

What rationalists should do. Meditate. Notice themselves thinking. Recognize grasping as a mechanism. Look for useful stuff in Buddhism.

Why I can't post. Not enough of an expert. Not able to meditate myself yet.

It actually strikes me that a series of posts on "What can we usefully learn from X tradition" would be interesting. Most persistent cultural institutions have at least some kind of social or psychological benefit, and while we've considered some (cf. the martial arts metaphors, earlier posts on community building, &c.) there are probably others that could be mined for ideas as well.

1Drahflow
I'd be similarly interested in covering philosophical Daoism, the path to wisdom I follow and believe to be mostly correct.
Things they get wrong: some believe in rebirth, too much reverence for "ancient masters" without good reevaluation, some believe in weird miracles.
Things they get right: meditation, a purely causal view of the world, free will as a local illusion, a relaxed attitude to pretty much everything (-> less bias from social influence and fear of humiliation), the insight that akrasia is overcome best not by willpower but by adjusting yourself to feel that what you need to do is right (and apparently ways to actually help you - at least me - with that), and a decent way to accept death as something natural.
4gwern
I kept waiting for 'alchemy' and immortality to show up in your list! I recently read through an anthology of Taoist texts, and essentially every single thing postdating the Lieh Tzu or the Huai-nan Tzu (-200s) was absolute rubbish, but the preceding texts were great. I've always found this abrupt disintegration very odd.
3David_Gerard
Know what alchemy's good for? Art and its production. Terrible chemistry, great for creation of art. Know what's actually a good text for this angle on alchemy? Promethea by Alan Moore, in which he sets out his entire system. (Not only educational, but a fantastic book that is at least as good as his famous '80s stuff.)
4[anonymous]
Respectfully disagree. I found Promethea to be poorly executed. There was a decent idea somewhere in there, but I think he was too distracted by the magic system to find it. One exception -- the aside about how the Christian and Muslim Prometheas fought during the Crusades. That was nicely done.
0David_Gerard
Yeah, the plot suffers bits falling off the end. Not the sides, thankfully. I think it's at least as coherent as Miracleman, and nevertheless remains an excellent exposition of alchemy and art.
0JulianMorrison
Daoism flunks badly on nature-worship.
0blogospheroid
Not enough of an expert on Buddhism, but I live its mother religion - Hinduism. There are enough similarities for me to comment on a few of your points. Rebirth - the question of which part of your self you choose to identify with is a persistent thing on OB/LW. When X and Y conflict and you choose to align yourself with X instead of Y, WHO OR WHAT has made that decision? One might say the consensus in the mind, or more modern answers. The point is that there are desires and impulses which stem from different levels of personality within you. There are animal impulses, basic human impulses (evo-psych), societal drives. There are many levels to you. The persistent question in almost all the dharma religions is: what do you choose to identify with? Even in rebirth, the memories of past lives are erased, and the impulses that drove you most strongly at your time of death decide where in the next life you would be. If you are essentially still hungering for stuff, the soul would be sent to stations where that hunger can be satiated. If you are essentially at peace, having lived a full life, you will go to levels that are subtler and presumably more abstract. You become more soul and less body, in a crude sense. Vedanta does believe in souls. I'm holding out for a consistent theory of everything in physics before I drop my beliefs about that one.
1JulianMorrison
Would you understand one?
2blogospheroid
I would try very hard to understand a theory that has been proclaimed by the majority of scientists as a true TOE. In particular, I would try to understand if there is a possibility of transmission of information that is similar to the transmigration of the soul. If there is no such comfort in the new theory, I assume I will spend a very difficult month and then get back on my feet with a materialist's viewpoint.

Aumann agreements are pure fiction; they have no real-world applications. The main problem isn't that no one is a pure Bayesian. There are 3 bigger problems:

  • The Bayesians have to divide the world up into symbols in exactly the same way. Since humans (and any intelligent entity that isn't a lookup table) compress information based on their experience, this can't be contemplated until the day when we derive more of our mind's sensory experience from others than from ourselves.
  • Bayesian inference is slow; pure Bayesians would likely be outcompeted by groups that used faster, less-precise reasoning methods, which are not guaranteed to reach agreement. It is unlikely that this limitation can ever be overcome.
  • In the name of efficiency, different reasoners would be highly orthogonal, having different knowledge, different knowledge compression schemes and concepts, etc.; reducing the chances of reaching agreement. (In other words: If two reasoners always agree, you can eliminate one of them.)

This would probably have to wait until May.

0conchis
"Pure fiction" and "no real world application" seem overly strong. Unless you are talking about individuals actually reaching complete agreement, in which case the point is surely true, but relatively trivial. The interesting question (real world application) is surely how much more we should align our beliefs at the margin. Also, whether there are any decent quality signals we can use to increase others' perceptions that we are Bayesian, which would then enable us to use each others' information more effectively.

I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:

Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box' and one good technique I found for... (read more)

Willpower building as a fundamental art, and some of the less obvious pitfalls, including akrasia-circumvention techniques that simply shunt willpower from one place to another, and overstraining, which damages your willpower reserves.

I need to hunt back down some of the cognitive science research on this before I feel comfortable posting it.

pjeby 120

...akrasia-circumvention techniques that simply shunt willpower from one place to another, and overstraining, which damages your willpower reserves.

Easy answer: don't use willpower. Ever.

I quit it cold turkey in late 2007, and can count on one hand the number of times I've been tempted to use it since.

(Edit to add: I quit it in order to force myself to learn to understand the things that blocked me, and to learn more effective ways to accomplish things than by pushing through resistance. It worked.)

don't use willpower. Ever.

Could you do a post on that?

5PhilGoetz
Consider cognitive behavioral therapy. You don't get someone to change their behavior by telling them to try really hard. You get them to convince themselves that they will get what they want if they change their behavior. People do what they want to do. We've gone over this in the dieting threads.
0[anonymous]
Yes, please!
5MrShaggy
My idea that I'm not ready to post is now: find a way to force pjeby to write regular posts.
2MendelSchmiedekamp
By all means do post. Clarification would be welcome, since we're almost certainly not using the term willpower in the same way.
5pjeby
I'm using it to mean relying on conscious choice in the moment to overcome preference reversal: forcing yourself to do something that, at that moment, you'd prefer not to, or to not do something that you'd prefer to. What I do instead is find out why my preference has changed, and either:
1. Remove that factor from the equation, either by changing something in my head or in the outside world, or
2. Choose to agree with my changed preference, for the moment. (Not all preference reversals are problematic, after all!)
1MendelSchmiedekamp
From that usage your claim makes much more sense. Willpower in my usage is more general: it covers whenever impulses are overridden or circumvented. In your example, it includes the in-the-moment overriding you describe, but also more subtle costs, like the cognitive computation of determining the "why" and forestalling the impulse while removing internal or external factors. My main point is that willpower is a limited resource that ebbs and flows during cognitive computation, often due to changing costs. But it can be trained up, conserved, and refreshed effectively, if certain hazards can be avoided.
0pjeby
I don't see how that's any different from what I said. How is an "impulse" different from a preference reversal? (i.e., if it's not a preference reversal, why would you need to override or circumvent it?)
1Paul Crowley
I repeat my usual plea at this point: please read Breakdown of Will before posting on this.
3pjeby
That book doesn't actually contain any solutions to anything, AFAICT. The two useful things I've gotten from it that enhanced my existing models were:
1. The idea of conditioned appetites, and
2. The idea that "reward" and "pleasure" are distinct.
There were other things that I learned, of course, like his provocative reward-interval hypothesis that unifies the mechanism of things like addiction, compulsion, itches and pain on a single, time-based scale. But that's only really interesting in an intellectual-curiosity sort of way at the moment; I haven't figured out anything one can DO with it, that I couldn't already do before. Even the two useful things I mentioned are mostly useful in explaining why certain things happen, and why certain of my techniques work on certain things. They don't really give me anything that can be turned into actual improvements on the state of the art, although they do suggest some directions for stretching what I apply some things to.
Anyway, if you're already familiar with the basic ideas of discounting and preference reversal, you're not going to get a lot from this book in practical terms. OTOH, if you think it'd be cool to know how and why your bargains with yourself fail, you might find it interesting reading. But I'm already quite familiar with how that works on a practical level, and the theory really adds nothing to my existing practical advice of "don't do that!"
(Really, the closest the book comes to giving any practical advice is to vaguely suggest that maybe willpower and intertemporal bargaining aren't such good ideas. Well, not being a scientist, I can state it plainly: they're terrible ideas. You want coherent volition across time, not continuous conflict and bargaining.)
0MendelSchmiedekamp
I'll take a closer look at it.
0matt
relevant: http://scienceblogs.com/cognitivedaily/2008/03/practicing_selfcontrol_consume.php

Some bad ideas on the theme "living to win":

  • Murder is okay. There are consequences, but it's a valid move nonetheless.
  • Was is fun. In fact, it's some of the best fun you can have as long as you don't get disabled or killed permanently.
  • Being a cult leader is a winning move.
  • Learn and practice the so-called dark arts!
1PhilGoetz
"War", I think you mean.

What would a distinctively rationalist style of government look like? Cf. Dune's Bene Gesserit government by jury: what if a quorum of rationalists reaching Aumann Agreement could make a binding decision?

What mechanisms could be put in place to stop politics being a mind-killer?

Why not posted: undeveloped idea, and I don't know the math.

1XFrequentist
This is a year late, but it's simply not ok that Futarchy not be mentioned here. So there you are.
1blogospheroid
Mencius Moldbug believes that if we were living in a world of many mini sovereign corporations competing for citizens, they would be forced to be rational. They would seek every way to keep paying customers (taxpayers). Another Dune idea could be relevant over here - the God Emperor. Have a really long-lived guy be king. He cannot take the shortcuts that many others do and has to think properly about how to govern. Addendum - I understand that this is a system builder's perspective, not an entrepreneur's perspective, i.e. a meta answer rather than an answer; sorry for that.
0JulianMorrison
That sounds like an evolution-style search, and he ought to be more careful, evolution only optimizes for the utility function - in this case, the ability to trap and hold "customers". I would categorize that among the pre-rational systems of government - alongside representative democracy, kings, constitutions, etc. A set of rules or a single decider to do the thinking for a species who can't think straight on their own. I was more interested in what a rationalist government would be like.

I'm vaguely considering doing a post about skeptics. It seems to me they might embody a species of pseudo-rationality, like Objectivists and Spock. (Though it occurs to me that if we define "S-rationality" as "being free from the belief distortions caused by emotion", then "S-rationality" is both worthwhile and something that Spock genuinely possesses.) If their supposed critical thinking skills allow them to disbelieve in some bad ideas like ghosts, Gods, homeopathy, UFOs, and Bigfoot, but also in some good ideas like cryonic... (read more)

1Annoyance
"but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse," Your listing of 'bad' and 'good' ideas reveals more about your personal beliefs than any supposed failings of skeptics.
1steven0461
OK, so can you name any idea that you think is bad, is accepted/fashionable in science-oriented circles, but is rejected by skeptics for the right reasons?
0Annoyance
Whether I think some idea is bad is completely irrelevant. What matters is whether I can show that there are compelling rational reasons to conclude that it's bad. There are lots of claims that I suspect may be true but that I cannot confirm or disprove. I don't complain about skeptics not disregarding the lack of rational support for those claims, nor do I suggest that the nature of skepticism be altered so that my personal sacred cows are spared.
0steven0461
Do you believe, then, that there are no ideas that are accepted/fashionable in science-oriented circles, yet that have rational support against them? I wouldn't have listed the ideas that I listed if I didn't think I could rationally refute them as being true, coherent, or useful. If it's not the case that 1) such ideas exist and 2) skeptics disagree with them, then what's the point of all their critical thinking? Why not just copy other people's opinions and call it a day? Is skepticism merely about truth-advocating and not truth-seeking?

Yet another post from me about theism?

This time, pushing for a more clearly articulated position. Yes, I realize that I am not endearing myself by continuing this line of debate. However, I have good reasons for pursuing it.

  • I really like LW and the idea of a place where objective, unbiased truth is The Way. Since I idealistically believe in Aumann’s Agreement theorem, I think that we are only a small number of debates away from agreement.

  • To the extent to which LW aligns itself with a particular point of view, it must be able to defend that view. I don’t w

... (read more)
MBlume 190

(Um, this started as a reply to your comment but quickly became its own "idea I'm not ready to post" on deconversions and how we could accomplish them quickly.)

Upvoted. It took me months of reading to finally decide I was wrong. If we could put that "aha" moment in one document... well, we could do a lot of good.

Deconversions are tricky though. Did anyone here ever read Kissing Hank's Ass? It's a scathing moral indictment of mainline Christianity. I read it when I was 15 and couldn't sleep for most of a night.

And the next day, I pretty much decided to ignore it. I deconverted seven years later.

I believe the truth matters, and I believe you do a person a favor by deconverting them. But if you've been in for a while, if you've grown dependent on, for example, believing in an eternal life... there's a lot of pain in deconversion, and your mind's going to work hard to avoid it. We need to be prepared for that.

If I were to distill the reason I became an atheist into a few words, it would look something like:

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds... (read more)

6David_Gerard
These two sentences, particularly the second, just explained for me why humans expect minds to be ontologically fundamental. Thank you!
0shokwave
Thank you for bringing this post to my attention! I'm going to use those lines.
6PhilGoetz
You're right; yet no one ever sees it this way. Before Darwin, no one said, "This idea that an intelligent creator existed first doesn't simplify things." Here is something I think would be useful: a careful information-theoretic explanation of why God must be complicated. When you explain to Christians that it doesn't make sense to say complexity originated because God created it, since God must be complicated, Christians reply (and I'm generalizing here because I've heard these replies so many times) one of two things:
  • God is outside of space and time, so causality doesn't apply. (I don't know how to respond to this.)
  • God is not complicated. God is simple. God is the pure essence of being, the First Cause. Think of a perfect circle. That's what God is like.
It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information. Of course, putting this explanation on LW might do no good to anybody.
1Nick_Tarleton
Keep in mind that if this complexity was derived from looking at external phenomena, or at the output of some simple computation, it doesn't reduce the prior probability.
1jimmy
Except that the library of all possible books includes the Encyclopedia Britannica but is far simpler.
0SoullessAutomaton
Presumably, God can also distinguish between "the set of books with useful information" and "the set of books containing only nonsense". That is quite complex indeed.
1jimmy
I'm afraid I wasn't clear. I am not arguing that "god" is simple or that it explains anything. I'm just saying that god's knowledge is compressible into an intelligent generator (an AI). The source code isn't likely to be 10 lines, but then again, it doesn't have to include the Encyclopedia Britannica to tell you everything the encyclopedia can, once it grows up and learns. F = ma is enough to let you draw out all physically possible trajectories from the set of all trajectories, and it is still rather simple.
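jimmy's "library of all possible books" point can be put in Kolmogorov-complexity terms (a sketch; notation assumed, constants suppressed):

```latex
% K(x): the length of the shortest program that outputs x.
% A program enumerating every finite string is tiny, so
K(\text{the set of all possible books}) = O(1),
% while a particular incompressible, informative book x costs
% roughly its own length to specify:
K(x) \approx |x| \gg O(1).
% Containing the encyclopedia (as part of everything) is cheap;
% singling it out from everything is what requires the bits.
```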
0pangloss
You say: "You're right; yet no one ever sees it this way. Before Darwin, no one said, 'This idea that an intelligent creator existed first doesn't simplify things.'" I may have to look up where it gets argued before Darwin, but I am pretty sure people challenged that before Darwin.
3John_Maxwell
It might be why you're an atheist, but do you think it would have swayed your Christian self much? I highly doubt that your post would come near to deconverting anyone. Many religious people believe that souls are essential for creativity and intelligence, and they won't accept the "you're wired to see intelligence" argument if they disbelieve in evolution (not uncommon). To deconvert people to atheism quickly, I think you need a sledgehammer. I still haven't found a really good one. Here are some areas that might be promising:
1. Ask them why God won't drop a grapefruit from the sky to show he exists. "He loves me more than I can imagine, right? And more than anything he wants me to know him, right? And he's all-powerful, right?" To their response: "Why does God consider blindly believing in him in the absence of evidence virtuous? Isn't that the sort of thing a made-up religion would say about their god to keep people faithful?"
2. The Problem of Evil: why do innocent babies suffer and die from disease?
3. I've heard there are lots of contradictions in the Bible. Maybe someone who is really dedicated could find some that are really compelling. Personally, I'm not interested enough in this topic to spend time reading religious texts, but more power to those who are.
A few moderately promising ones: Why does God heal cancer patients but not amputees? Why do different religious denominations disagree, when they could just ask God for the answer? Why would a benevolent God send people who happened to be unlucky enough not to hear about him to eternal damnation?
5Alexandros
I think a very straightforward contradiction is here: http://skepticsannotatedbible.com/contra/horsemen.html 2 Samuel and 1 Chronicles are supposed to be parallels, telling the same story, yet one of them probably lost or gained a zero along the way. Many Christians who see this are forced to retreat to a 'softer' interpretation of the Bible that allows for errors in transcription, etc. It's the closest to a quick 'n' dirty sledgehammer I have ever had. And a follow-up: why hasn't this been discussed in your church? Surely a group of truth-seekers wouldn't shy away from such fundamental criticisms, even just to defuse them.
5orthonormal
Problem is, theists of reasonable intelligence spend a good deal of time honing and rehearsing their replies to these. They might be slightly uneasy with their replies, but if the alternative is letting go of all they hold dear, then they'll hold to their guns. Catching them off guard is a slightly better tactic. Or, to put it another way: if there were such a sledgehammer lying around, Richard Dawkins (or some other New Atheist) would be using it right now. Dawkins uses all the points you listed, and more; and the majority of people don't budge.
4MBlume
Well...it did sway my Christian self. My Christian self generated those arguments and they, with help from Eliezer's writings against self-deception, annihilated that self.
2orthonormal
That's as good an exposition of this point as any I've seen. It deserves to be cleaned up and posted visibly, here on LW or somewhere else.
0MBlume
thanks =)
0Jack
So:
1. (x): x is a possible entity; the more complicated x is, the less likely it is to exist, controlling for other evidence.
2. (x): x is a possible entity; the more intelligent x is, the more complicated x is, controlling for other properties.
3. God is maximally intelligent.
∴ God's existence is maximally unlikely, unless there is other evidence or unless it has other properties that make its existence maximally more likely.
(Assume "intelligent" to refer to the possession of general intelligence.)
I think most theists will consent to (1), especially given that it's implicit in some of their favorite arguments. (3) they consent to, unless they mean "God" as merely a cosmological constant or first cause, in which case we're having a completely different debate. So the issue is (2). I'm sure some of the cognitive-science types can give evidence for why intelligence is necessarily complicated. There is, however, definitive evidence for the correlation of intelligence and complexity: human brains are vastly more complex than the brains of other animals; computers get more complicated the more information they hold; etc. It might actually be worth making the distinction between intelligence and the holding of data. It is a lot easier to see how the more information something contains, the more complicated it is, since one can just compare two sets of data, one bigger than the other, and see that one is more complicated. Presumably, God needs to contain information on everyone's behavior, the events that happen at any point in time, prayer requests, etc. By the way, is there a way for me to use symbolic-logic notation in XML?
0MBlume
hmm...if we can get embedded images to work, we're set. http://www.codecogs.com/png.latex?\int_a^b\frac{1}{\sqrt{x}}dx Click that link, and you'll get a rendered png of the LaTeX expression I've placed after the ?. Replace that expression with another and, well, you'll get that too. If you're writing a top-level post, you can use this to pretty quickly embed equations. Not sure how to make it useful in a comment though.
4Vladimir_Nesov
Here it is: [rendered equation and its source code not preserved in this transcript]. (It was mentioned before.)
0MBlume
awesome =)
-1byrnema
I think you are looking at this from an evolutionary point of view? Then it makes sense to make statements like "more and more complex states are less likely" (i.e., they take more time) and "intelligence increases with the complexity" (of organisms). Outside this context, though, I have trouble understanding what is meant by "complicated" or why "more intelligent" should be more complex. In fact, you could skip right from (1) to (3) -- most theists would be comfortable asserting that God is maximally complex. However, in response to (1) they might counter with: if something does exist, you can't use its improbability to negate that it exists.
0Jack
1. I'm not sure most theists would be comfortable asserting that God is maximally complex.
2. The Wikipedia article on complexity looks helpful.
3. It is true that if something does exist you can't use its improbability to negate its existence. But this option is allowed for in the argument: "unless there is other evidence or it has other properties that make its existence maximally more likely". So if God is, say, necessary, then he is going to exist no matter his likelihood. What this argument does is set a really low prior for the probability that God exists. There is never going to be one argument that proves atheism, because no argument is going to rule out the existence of evidence the other way. The best we can do is give a really low initial probability and wait to hear arguments that swing us the other way AND show that some conceptions of God are contradictory or impossible.
Edit - You're right though, if you mean that there is a problem with the phrasing "maximally unlikely" if there is still a chance for its existence. Certainly "maximally unlikely" cannot mean "0".
6Nanani
Something about this phrase bothers me. I think you may be confused as to what is meant by The Way. It isn't about any specific truth, much less Truth. It is about rationality, ways to get at the truth and update when it turns out that truth was incomplete, or facts change, and so on. Promoting an abstract truth is very much -not- the point. I think it will help your confusion if you can wrap your head around this. My apologies if these words don't help.
5Paul Crowley
I would prefer us not to talk about theism all that much. We should be testing ourselves against harder problems.
4MBlume
Theism is the first, and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches. If we really intend to make more rationalists, theism will be the first hurdle, and there will be an art to clearing that hurdle quickly, cleanly, and with a minimum of pain for the deconverted. I see no reason not to spend time honing that art.
7Paul Crowley
First, the subject is discussed to death. Second, our target audience at this stage is almost entirely atheists; you start on the people who are closest. Insofar as there are theists we could draw in, we will probably deconvert them more effectively by raising the sanity waterline and having them drown religion without our explicit guidance on the subject; this will also do more to improve their rationality skills than explicit deconversion.
2MBlume
sigh You're probably right. I have a lot of theists in my family and in my social circle, and part of me still wants to view them as potential future rationalists.
9Vladimir_Nesov
We should teach healthy habits of thought, not fight religion explicitly. People should be able to feel horrified by the insanity of supernatural beliefs for themselves, not argued into considering them inferior to the alternatives.
5JulianMorrison
When you don't have a science, the first step is to look for patterns. How about assembling an archive of de-conversions that worked?
2Eliezer Yudkowsky
The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.
3David_Gerard
The first place that springs to mind to look is deconversion-oriented documents that theists warn each other off and which they are given prepared opinions on. The God Delusion is my favourite current example - if you ever hear a theist dissing it, ask if they've read it; it's likely they won't have, and will (hopefully) be embarrassed by having been caught cutting'n'pasting someone else's opinions. What others are there that have produced this effect?
2Alicorn
People are more willing than you might think to openly deride books they admit that they have never read. I know this because I write Twilight fanfiction.
6wedrifid
Almost as if there are other means than just personal experience by which to collect evidence. "Standing on the shoulders of giants hurling insults at Stephenie Meyer."
1Zetetic
I am very curious about your take on those who attack Twilight for being anti-feminist, specifically for encouraging young girls to engage in male-dependency fantasies. I've heard tons of this sort of criticism from men and women alike, and since you appear to be the de facto voice of feminism on LessWrong, I would very much appreciate any insight you might be able to give. Are these accusations simply overblown nonsense in your view? If you have already addressed this, would you be kind enough to post a link?
9Alicorn
I really don't want to be the voice of feminism anywhere. However, I'm willing to be the voice of Twilight apologism, so:

Bella is presented as an accident-prone, self-sacrificing human, frequently putting herself in legitimately dangerous situations for poorly thought out reasons. If you read into the dynamics of vampire pairing-off, which I think is sufficiently obvious that I poured it wholesale into my fic, this is sufficient for Edward to go a little nuts. Gender needn't enter into it. He's a vampire, nigh-indestructible, and he's irrevocably in love with someone extremely fragile who will not stop putting herself in myriad situations that he evaluates as dangerous. He should just turn her, of course, but he has his own issues with considering that a form of death, which aren't addressed head-on in the canon at all; he only turns her when the alternative is immediate death rather than slow gentle death by aging. So instead of course he resorts to being a moderately controlling "rescuer" - of course he does things like disable her car so she can't go visiting wolves over his warnings. Wolves are dangerous enough to threaten vampires, and Edward lives in a world where violence is a first or at least a second resort to everything. Bella's life is more valuable to him than it is to her, and she shows it. It's a miracle he didn't go spare to the point of locking her in a basement, given that he refused to make her a vampire.

(Am I saying Bella should have meekly accepted that he wanted to manage her life? No, I'm saying she should have gotten over her romantic notion that Edward needed to turn her himself and gotten it over with. After she's a vampire in canon, she's no longer dependent - emotionally attached, definitely, and they're keeping an eye on her to make sure she doesn't eat anybody, but she's no longer liable to be killed in a car accident or anything, and there's no further attempt ever to restrict her movement. She winds up being a pivotal figure in the
6HonoreDB
I haven't read Twilight, and I don't criticize books I haven't read, but I do object in general to the idea that something can't be ideologically offensive just because it's justified in-story. Birth of a Nation, for example, depicts the founding of the Ku Klux Klan as a heroic response to a bestial, aggressive black militia that's been terrorizing the countryside. In the presence of a bestial, aggressive black militia, forming the KKK isn't really a racist thing to do. But the movie is still racist as all hell for contriving a situation where forming the KKK makes sense. Similarly, I'd view a thriller about an evil international conspiracy of Jewish bankers with profound suspicion.
1Alicorn
I think it's relevant here that vampires are not real.
5HonoreDB
Well, sure, but men who think women need to stay in the kitchen for their own good are. What makes Twilight sound bad is that it's recreating something that actually happens, and something that plenty of people think should happen more, in a context where it makes more sense.
3Alicorn
There are other female characters in the story. Alice can see enough to dance circles around the average opponent. Rosalie runs around doing things. Esme's kind of ineffectual, but then, her husband isn't made out to be great shakes in a fight either. Victoria spends two books as the main antagonist. Jane is scary as hell. And - I repeat - the minute Bella is not fragile, there is no more of the objectionable attitude.
3HonoreDB
That doesn't necessarily mean the Edward/Bella dynamic arose naturally from the plot rather than being written to appeal to patriarchal tendencies. I'm completely unequipped to argue about whether or not this was the case. But I'm pretty confident the reason people who haven't read the book think it sounds anti-feminist is that we assume Stephenie Meyer started with the Edward-Bella relationship and built the characters and the world around it.
4Zetetic
Alicorn,

First of all, thanks for taking the time to give an in-depth response. I personally have misgivings similar to those expressed by HonoreDB, insofar as it seems that although the fantastical elements of the story do 'justify' the situation in a sense, they appear to be designed to do so. I felt that this sort of plot device was essentially a post hoc excuse to perpetuate a knight-in-shining-armor dynamic in order to tantalize a somewhat young and ideologically vulnerable audience in the interest of turning a quick buck. Then again, I may be being somewhat oversensitive, or I may be letting my external biases (I personally don't care for the young adult fantasy genre) cloud my judgment.
3shokwave
I don't credit Stephenie Meyer with enough intelligence to have figured out this line of reasoning. I think it's most likely that Meyer created situations so that Edward could save Bella, and due to either lack of imagination or inability to notice, the preponderance of dangerous situations (and especially dangerous people) ended up very high - high enough to give smarter people ideas like "violence is just more common in that world". That said, my views on Twilight are extremely biased by my social group.
3Alicorn
My idea that violence is common in the Twilight world is not primarily fueled by danger to Bella in particular. I was mostly thinking of, say, Bree's death, or the stories about newborn armies and how they're controlled, or the fact that the overwhelming majority of vampires commit murder on a regular basis.
2AstroCJ
I have a friend currently researching this precise topic; she adores reading Twilight and simultaneously thinks that it is completely damaging for young women to be reading. The distinction she drew, as far as I understood it, was that (1) Twilight is a very, very alluring fantasy - one day an immortal, beautiful man falls permanently in love with you for the rest of time - and (2) canon!Edward is terrifying when considered not through the lens of Bella. Things like him watching her sleep before they'd spoken properly make him not someone you want to hold up as a good candidate for romance. (I personally have not read it, though I've read Alicorn's fanfic and been told a reasonable amount of detail by friends.)
0David_Gerard
Yes, but catching them out can be fun :-)
3pjeby
It seems to me that Derren Brown once did some sort of demonstration in which he mass-converted some atheists to theists, and/or vice versa. Perhaps we should investigate what he did. ;-)
4Paul Crowley
* Textual description of what Brown did
* Video discussion

(Updated following Vladimir_Nesov's comment - thanks!)
0Vladimir_Nesov
Even where it's obvious, you should add textual description for the links you give. This is the same courtesy as not saying just "Voted up", but adding at least some new content in the same note.
0Paul Crowley
Fixed, thanks!
1JulianMorrison
You sound real sure of that. Since it's you saying it, you probably have data. Can you link it so I can see?
3Paul Crowley
If something worked that reliably, wouldn't we know about it? Wouldn't it, for example, be seen many times in one of these lists of deconversion stories?
4JulianMorrison
That only rules out the most surface-obvious of patterns. And I doubt anyone has tried deconverting someone in an MRI machine. It's too early to give up.
2Paul Crowley
No-one's giving up, but until we find such a way we have to proceed in its absence.
3gjm
They are potential future rationalists. They're even (something like) potential present rationalists; that is, someone can be a pretty good rationalist in most contexts while remaining a theist. This is precisely because the internal forces discouraging them from changing can be so strong.
2cabalamat
Indeed. When a community contains more than a critical number of theists, their irrational decision-making can harm both them and the whole community. By deconverting theists, we help them and everyone else. I'd like to see a discussion on the best ways to deconvert theists.
1CronoDAS
Capture bonding seems to be an effective method of changing beliefs.
0saturn
Here's the open-and-shut case against theism: People often tell stories to make themselves feel better. Many of these stories tell of various invisible and undetectable entities. Theory 1 is that all such stories are fabrications; Theory 2 is that an arbitrary one is true and the rest are fabrications. Theory 2 contains more burdensome detail but doesn't predict the data better than Theory 1. Although to theists this isn't a very convincing argument, it is a knock-down argument if you're a Bayesian wannabe with sane priors.
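A minimal formalization of the same comparison, using my own labels (D for the corpus of comfort-stories, T1 and T2 as above):

$$\frac{P(T_2 \mid D)}{P(T_1 \mid D)} = \frac{P(D \mid T_2)}{P(D \mid T_1)} \cdot \frac{P(T_2)}{P(T_1)} = 1 \cdot \frac{P(T_2)}{P(T_1)} < 1$$

The likelihood ratio is 1 by stipulation (neither theory predicts the data better), and T2 is T1's story-generating mechanism plus the extra conjunct "story k is true", so any sane prior gives P(T2) < P(T1); the posterior ratio follows.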
0spriteless
Y'all are misunderstanding theists' main reason for belief when you attack its likelihood. They don't think God sounds likely, but that it's better to assume God exists so you can at least pretend one's happiness is justified; God gives hope, and hopelessness is the enemy. That's the argument you'd need to undermine to deconvert people. I'm not articulate enough to do that, so I'll link someone who writes for a living instead. http://gretachristina.typepad.com/greta_christinas_weblog/2008/11/a-safe-place-to-land.html
0byrnema
Right, it would be easier to deconvert if you give some hope about the other side. An analogous idea at LW is leaving a line of retreat. Note: for editing (italics, etc), there's a Help button on the lower right hand corner of the comment box.
0spriteless
Thank you. Eliezer's an interesting read, but I prefer to link to rationalists outside this community when possible... enough people have already read his work that I'd want to get in some new ideas, and because we need more girls.

A criticism of practices on LW that are attractive now but which will hinder "the way" to truth in the future; practices that lead to a religious idolatry of ideas (a common fate of many "in-groups") rather than objective detachment. For example,

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

(2) Use of analogies without formally defining the ideas behind them leads to content not o... (read more)

2pangloss
I am not sure I agree with your second concern. Sometimes premature formalization can take us further off track than leaving things with intuitively accessible handles for thinking about them. Formalizing things, at its best, helps reveal the hidden assumptions we didn't know we were making, but at its worst, it hard-codes some simplifying assumptions into the way we start talking and thinking about the topic at hand. For instance, as soon as we start to formalize sentences of the form "If P, then Q" as material implication, we adopt an analysis of conditionals that straitjackets them into the role of an extensional (truth-functional) semantics. It is not uncommon for someone who has just taken introductory logic to train themselves into forcing natural language into this mold, rather than evaluating the adequacy of the formalism for explaining natural language.
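A concrete instance of the worry (the standard textbook example, not pangloss's own):

$$P \to Q \;\equiv\; \lnot P \lor Q$$

Under this reading, any conditional with a false antecedent comes out true - "If the moon is made of green cheese, then 2 + 2 = 5" is a true sentence of propositional logic - which is plainly not how natural-language conditionals behave.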
0PhilGoetz
I plan to keep doing this; it saves time. Isn't this inherent in using analogies? Are you really saying "Don't use analogies"?
1byrnema
I like analogies. I think they are useful in introducing or explaining an idea, but shouldn't be used as a substitute for the idea.

Winning Interpersonally

cousin_it would like to know how rationality has actually helped us win. However, in his article, he completely gives up on rationality in one major area, admitting that "interpersonal relationships are out."

Alex strenuously disagrees, asking "why are interpersonal relationships out? I think rationality can help a great deal here."

(And, of course, I suppose everyone knows my little sob-story by now.)

I'd like to get a read from the community on this question.

Is rationality useless -- or worse, a liability when deal... (read more)

6pjeby
Only if you translate this into meaning you've got to communicate like Spock, or talk constantly about things that bore, depress, or shock people, or require them to think when they want to relax, etc. (That article, btw, is by a guy who figured out how to stop being so "rational" in his personal relationships. Also, as it's a pickup artist's blog, there may be some images or language that may be offensive or NSFW. YMMV.)
3SoullessAutomaton
That article seems kind of dodgy to me. Do people really fail to realize that the behaviors he describes are annoying and will alienate people? The article also gets on my nerves a bit because it assumes that learning to be socially appealing to idiots is 1) difficult and 2) rewarding. Probably I'm just not in his target demographic, so oh well.
-1pjeby
Well, he did, and I did, so that's a sample right there. Sounds like you missed the part of the article where he pointed out that thinking of those people as "idiots" is snobbery on your part. The value of a human being's life isn't really defined by the complexity of the ideas that get discussed in it.
0SoullessAutomaton
No, but the value to me of interacting with them is. I would like nothing more than to know that they live happy and fulfilling lives that do not involve me. Also, "snobbery" is a loaded term. Is there a reason I am obligated to enjoy the company of people I do not like?
4pjeby
Sounds like you also missed the part about acquiring an appreciation for the more experiential qualities of life, and for more varieties of people. More so than "idiots"? ;-) Only if you want to increase your opportunities for enjoyment in life, be successful at endeavors that involve other people, reduce the amount of frustration you experience at family gatherings... you know, generally enjoying yourself without needing to have your brain uploaded first. ;-)
2SoullessAutomaton
I do have an appreciation for those things. I find them enjoyable, distracting, but ultimately unsatisfying. That's like telling someone who eats a healthy diet to acquire an appreciation for candy. Haha, I wondered if you would call me on that. You are right, of course, and for the most part my attitude towards people isn't as negative as I made it sound. I was annoyed by the smug and presumptuous tone of that article. I do fine enjoying myself as it is, and it's not like I can't work with people--I'm talking only about socializing or other leisure-time activities. And as far as that goes, I absolutely fail to see the benefit of socializing with at least 90% of the people out there. They don't enjoy the things I enjoy and that's fine; why am I somehow deficient for failing to enjoy their activities? Like I said, I don't think I'm really in the target demographic for that article, and I'm not really sure what you're trying to convince me of, here.
3pjeby
I'm not trying to convince you of anything. You asked questions. I answered them. Hm, so who's trying to convince who now? ;-) Interesting. I found its tone to be informative, helpful, and compassionately encouraging. Who said you were? Not even the article says that. The author wrote, in effect, that he realized that he was being a snob and missing out on things by insisting on making everything be about ideas and rightness and sharing his knowledge, instead of just enjoying the moments, and by judging people with less raw intelligence as being beneath him. I don't see where he said anybody was being deficient in anything. My only point was that sometimes socializing is useful for winning -- even if it's just enjoying yourself at times when things aren't going your way. I personally found that it limited my life too much to have to have a negative response to purely- or primarily- social interactions with low informational or practical content. Now I have the choice of being able to enjoy them for what they are, which means I have more freedom and enjoyment in my life. But notice that at no time or place did I use the word "deficiency" to describe myself or anyone else in that. Unfulfilled potential does not equal deficiency unless you judge it to be such. And if you don't judge or fear it to be such, why would the article set you off? If you were really happy with things as they are, wouldn't you have just said, "oh, something I don't need", and gone on with your life? Why so much protest?
1SoullessAutomaton
This was the impression I got from the article's tone, as well as your previous comments--an impression of "you should do this for your own good". If that was not the intent, I apologize; it is easy to misread tone over the internet. Because there have been other times when people expressed opinions about what I ought to be doing for enjoyment (cf. the kind of helpfulness described as optimizing others) and I find it irritating. It's a minor but persistent pet peeve. I remarked on the article originally mainly because the advice it offered seemed puzzlingly obvious.
1pjeby
Ah. All I said in the original context was that rationality is only an obstacle in social situations if you used it as an excuse to make everything about you and your ideas/priorities/values, and gave the article as some background on the ways that "rational" people sometimes do that. No advice was given or implied. As for the article's tone, bear in mind that it's a pickup artist's blog (or more precisely, the blog of a trainer of pickup artists). So, his audience is people who already want to improve their social skills, and therefore have already decided it's a worthy goal to do so. That's why the article doesn't attempt to make a case for why someone would want to improve their social skills -- it is, after all, a major topic of the blog.
0SoullessAutomaton
Yes, this is what I meant when I said I probably wasn't in the target demographic--my social skills are acceptable, but my desire to socialize is fairly low. Anyway, sorry for the pointless argument, heh.
5[anonymous]
del
4anonymouslyanonymous
"We commonly speak of the sex 'drive', as if it, like hunger, must be satisfied, or a person will die. Yet there is no evidence that celibacy is in any way damaging to one's health, and it is clear that many celibates lead long, happy lives. Celibacy should be recognised as a valid alternative sexual lifestyle, although probably not everyone is suited to it." -J. S. Hyde, Understanding Human Sexuality, 1986 Source.
8MBlume
I have been in a happy, mutually satisfying romantic/sexual relationship once in my life. We had one good year together, and it was The. Best. Year. Of. My. Life. I know people say that when something good happens to you, you soon adjust, and you wind up as happy or as sad as you were before, but that was simply not my experience. I'd give just about anything to have that again. Such is my utility function, and I do not intend to tamper with it.
9anonymouslyanonymous
People differ. All I'm trying to say is this: telling someone something is a necessary precondition for their leading a meaningful life, when that is not the case, is likely to create needless suffering.
1MBlume
indeed
6MTGandP
This is really remarkable to read six years later, since, although I don't know you personally, I know your reputation as That Guy Who Has Really Awesome Idyllic Relationships.
4PhilGoetz
I've read several times that that feeling lasts 2-3 years for most people. That's the conventional wisdom. I've read once that, for some people, it lasts their whole life long. (I mean, once in a scholarly book. I've read it many times in novels.)
0MBlume
I rather suspect I might be one of those people. It's been over three years since I first fell for her, and over nine months since those feelings were in any way encouraged, and I still feel that attachment today. If it turns out I am wired to stay in love for the long term, that'd certainly be a boon under the right circumstances. Rather sucks now though.
0Jack
Don't know if it applies to you. But I imagine a very relevant factor is whether or not you get attached to anyone else.
0[anonymous]
People differ. All I'm saying is this: telling people something is absolutely necessary for them to have a meaningful life, when that thing is not absolutely necessary for them to have a meaningful life, is likely to produce needless suffering.
2A1987dM
Er...
-2Shmi
That's involuntary celibacy, not a lifestyle choice.
2A1987dM
I guess the male LessWrongers that MBlume was thinking about in the ancestor comment haven't chosen that.
0Shmi
Right, but that's not what the quote you replied to was about.
3MendelSchmiedekamp
I have much I could say on the subject of interpersonal application of rationality (especially to romantic relationships), much of it positive and promising. Unfortunately I don't know yet how well it will match up with rationality as it's taught in the OB/LW style - which will decide how easy that is for me to unpack here.
2MBlume
Well, this thread might be a good place to start =) ETA: I don't think anything should ever be said against an idea which is shown to work. If its epistemic basis is dodgy, we can make a project of shoring it up, but the fact that it works means there's something supporting it, even if we don't yet fully understand it.
0MendelSchmiedekamp
What I do need to do is to think more clearly (for which now is not the best time) on whether or not the OB/LW flavor of rationality training is something which can communicate the methods I've worked out. Then it's a matter of trade-offs between forcing the OB/LW flavor and trying to use a related, but better-fitting, flavor. Which means computing estimates on culture, implicit social biases and expectations. All of which takes time and experiments, much of which I expect to fail. Which I suppose exemplifies the very basics of what I've found works - individual techniques can be dangerous because when over-generalized there are simply new biases to replace old ones. Instead, forget what you think you know and start re-building your understanding from observation and experiment. Periodically re-question the conclusions you make, and build your understanding from bite-size pieces to larger and larger ones. Which has everything to do with maintaining rational relationships with non-rational, and even deeply irrational, people, especially romantic ones. But this takes real work, because each relationship is its own skill, its own "technique", and you need to learn it on the fly. On the plus side, if you get good at it you'll be able to learn how to deal with complex adaptive systems quickly - sort of a meta-skill, as it were.
2Alicorn
There are people who will put up with a relentlessly and honestly rationalist approach to one's friendship or other relationship with them. However, they are rare and precious, and I use the words "put up with" instead of "enjoy and respond in kind" because they do it out of affection, and (possibly, in limited situations) admiration that does not inspire imitation. Not because they are themselves rationalists, reacting rationally to the approach, but because they just want to be friends enough to deal.
1cousin_it
To expand on my phrase "interpersonal relationships are out"... Talking to people, especially the opposite sex, strongly exercises many subconscious mechanisms of our brain. Language, intonation, emotion, posture: you just can't process everything rationally as it comes at you in parallel at high bandwidth. Try dancing from first principles; you'll fail. If you possess no natural talent for it, you have no hope of winning an individual encounter through rationality. You can win by preparation - slowly developing such personal qualities as confidence, empathy and a sense of humor. I have chosen this path; it works.
0[anonymous]
deleted
0cousin_it
If by rational you mean successful, then yes. If you mean derived from logic, then no. I derived it from intuition.
0[anonymous]
deleted
0Nanani
Rationality helping in relationships (here used to mean all interpersonal, not just romance):

* Use "outside view" to figure out how your interactions look to others; not only to the person you are talking to but also to the social web around you.
* Focus on the goals, yours and theirs. If these do not match, the relationship is doomed in the long run, romantic or not.
* Obviously, the whole list of cognitive biases and how to counter them. When you -know- you are doing something stupid, catching yourself rationalizing it and what not, you learn not to do that stupid thing.
0SoullessAutomaton
The answers to this are going to depend strongly on how comfortable we are with deception when dealing with irrational individuals.
0[anonymous]
deleted

I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.

Seems more like a group chat than a post, but let's see what you all think.

1jscn
I've found the work of Stefan Molyneux to be very insightful with regards to this (his other work has also been pretty influential for me). You can find his books for free here. I haven't actually read his book on this specific topic ("Real-Time Relationships: The Logic of Love") since I was following his podcasting and forums pretty closely while he was working up to writing it.
0Lawliet
Do you think you could summarise it for everybody in a post?
1jscn
I'm not confident I could do a good job of it. He proposes that most problems in relationships come from our mythologies about ourselves and others. In order to have good relationships, we have to be able to be honest about what's actually going on underneath those mythologies. Obviously this involves work on ourselves, and we should help our partner to do the same (not by trying to change them, but by assisting them in discovering what is actually going on for them). He calls his approach to this kind of communication the "Real-Time Relationship." To quote from the book:

"The Real-Time Relationship (RTR) is based on two core principles, designed to liberate both you and others in your communication with each other:

1. Thoughts precede emotions.
2. Honesty requires that we communicate our thoughts and feelings, not our conclusions."

For a shorter read on relationships, you might like to try his "On Truth: The Tyranny of Illusion". Be forewarned that, even if you disagree, you may find either book an uncomfortable read.
0Alicorn
This sounds very interesting, but I don't think I'm qualified to write it either.

(rationalism:winning)::(science:results)

We've argued over whether rationalism should be defined as that which wins. I think this is isomorphic to the question of whether science should be defined as that which gets good results.

I'd like to look at the history of science in the 16th-18th centuries, to see whether such a definition would have been a help or a hindrance. My priors say that it would have been a hindrance, because it wouldn't have kicked contenders out of the field rapidly.

Under position 1, "science = good results", you would have compe... (read more)

The ideal title for my future post would be this:

How I Confronted Akrasia and Won.

It would be an account of my dealing with akrasia, which so far resulted in eliminating two decade-long addictions and finally being able to act according to my current best judgment. I also hope to describe a practical result of using these techniques (I specified a target in advance and I'm currently working towards it.)

Not posted because:

  1. The techniques are not yet tested even on myself. They worked perfectly for about a couple of months, but I wasn't under any severe str

... (read more)
6orthonormal
That is (perhaps) unintentionally hilarious, BTW.

Regarding all the articles we've had about the effectiveness of reason:

Learning about different systems of ethics may be useless. It takes a lot of time to learn all the forms of utilitarianism and their problems, and all the different ethical theories. And all that people do is look until they find one that lets them do what they wanted to do all along.

IF you're designing an AI, then it would be a good thing to do. Or if you've already achieved professional and financial success, and got your personal life in order (whether that's having a wife, having... (read more)

0conchis
Also potentially useful if you're involved in any way in policy formation. (And yes, even when there are political constraints). In practice, I find the most useful aspects of having a working knowledge of lots of different ethical systems is that it makes it easier to: (a) quickly drill down to the core of many disagreements. Even if they're not resolvable, being able to find them quickly often saves a lot of pointless going around in circles. (There are network externalities involved here as well. Knowing this stuff is more valuable when other people know it too.) (b) quickly notice (or suspect) when apparently sensible goal sets are incompatible (though this is perhaps more to do with knowing various impossibility theorems than knowing different ethical systems).

Any interest in a top-level post about rationality/poker inter-applications?

0Alicorn
I would be interested if I knew how to play poker. Does your idea generalize to other card games (my favorite is cassino, I'd love to figure out how to interchange cassino strategies with rationality techniques), or is something poker-specific key to what you have to say?
0steven0461
Mostly I think it doesn't. Some of it may carry over to games in general.
0AllanCrossman
Yes.

The Implications of Saunt Lora's Assertion for Rationalists.

For those who are unfamiliar, Saunt Lora's Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.

A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz's views on the nature of possibility (with Arnauld's objection) and David Lewis's views on the nature of possibility (with Kripke's objection) are but one striking example. If there is anything to... (read more)

3David_Gerard
It would first require a usable definition of "genuinely new" that is not susceptible to goalpost-shifting and is actually useful for something.
3[anonymous]
That was part of the joke in Anathem. Saunt Lora's assertion had actually first been stated by Saunt X, but it also occurs in the pre-X writings of Saunt Y, and so on...

Scott Peck, author of "The Road Less Traveled", which was extremely popular ten years ago, theorised that people become more mature over time, and can get stuck at a lower level of maturity. From memory, the stages were:

  1. Selfish, unprincipled
  2. Rule-following
  3. Rational
  4. Mystical.

Christians could be either rule-following - a stage of maturity most people could leave behind in their teens, needing a big friendly policeman in the sky to tell them what to do - or Mystical.

Mystical people had a better understanding of the World because they did not expect it ... (read more)

0Nanani
I suspect most, if not all, regulars will dismiss these stages as soon as reading convinces them that the words "rational" and "mystical" are being used in the right sense. That is, few here would be impressed by "enjoying the mystery of nature". However it might be useful for beginners who haven't read through the relevant sequences. Voted up.
0pjeby
I don't think that "enjoying the mystery of nature" is an apt description of that last stage. My impression is more that it's about appreciating the things that can't be said; i.e., of the "he who speaks doesn't know, and he who knows doesn't speak" variety. There are some levels of wisdom that can't be translated verbally without sounding like useless tautologies or proverbs, so if you insist on verbal rationality as the only worthwhile knowledge, then such things will remain outside your worldview. So in a sense, it's "mystical", but without being acausal, irrational, or supernatural.

I have this idea in my mind that my value function differs significantly from that of Eliezer. In particular I cannot agree to blowing up Huygens in the Baby-Eater scenario presented.

To summarize shortly: He gives a scenario which includes the following problem:

Some species A in the universe has as a core value the creation of unspeakable pain in their newborns. Some species B has as a core value the removal of all pain from the universe. And there is humanity.

In particular there are (among others) two possible actions: (1): Enable B to kill off all of A, with... (read more)

1SoullessAutomaton
I seem to recall that there was no genocide involved; B intended to alter A such that they would no longer inflict pain on their children. The options were:

1. B modifies both A and humanity to eliminate pain; also modifies all three races to include parts of what the other races value.
2. Central star is destroyed, the crew dies; all three species continue as before.
3. Human-colonized star is destroyed; lots of humans die, humans remain as before otherwise; B is assumed to modify A as planned above to eliminate pain.
0PhilGoetz
Does Eliezer's position depend on the fact that group A is using resources that could otherwise be used by group B, or by humans? Group B's "eliminate pain" morality itself has mind-bogglingly awful consequences if you think it through.

Lurkers and Involvement.

I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the content on Less Wrong.

I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes, per day, folks spend on this site.

For myself, I'd estimate no more than 10 or 15 minutes but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in 5. Even then... (read more)

Putting together a rationalist toolset: including all the methods one needs to know, but also - and very much so - the real-world knowledge that helps one get along or ahead in life. It doesn't have to be reinvented, just pointed out and evaluated.

In short: I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.

2thomblake
I disagree with much of this. Not sure what 'reasonable shape' means, but I'm not above ignoring physical fitness in the pursuit of more lofty goals. Same with smoking - while I'll grant that there are more efficient ways to get the benefits of smoking, for an established smoker-turned-rationalist it might not be worth the time and effort to quit. And I'm also not sure what you mean by 'positive psychology'.
2MartinB
Just some examples. It might be that smoking is not as bad as it's currently presented. Optimizing lifestyle for higher chances of survival seems reasonable to me, but might not be everyone's choice. What I do not find useful in any instance are grumpy rationalists who scorn the whole world. Do you agree with the importance of 'knowledge about the real world'? Regarding positive psychology: look up Daniel Gilbert and Martin Seligman. Both gave nice talks on TED.com and have something to say about happiness.

There has been some calling for applications of rationality; how can this help me win? This, combined with the popularity of and discussion surrounding "Stuck in the middle with Bruce", gave me an idea for a potential series of posts relating to LWers' pastimes of choice. I have a feeling most people here have a pastime, and if rationalists should win, there should be some way to map the game to rational choices.

Perhaps articles discussing "how rational play can help you win at x" and "how x can help you think more rationally" would ... (read more)

[-]pre10

memetic engineering

The art of manipulating the media, especially news, and public opinion. Sometimes known as "spin-doctoring" I guess, but I think the memetic paradigm is probably a more useful one to attack it from.

I'd love to understand that better than I do. Understanding it properly would certainly help with evangelism.

I fear that very few people really do grok it though; certainly I wouldn't be capable of writing much of relevance about it yet.

1Emile
I'm not sure that's something worth studying here - it's kinda sneaky and unethical.
7pre
Oh, so we're just using techniques which win without being sneaky? Isn't 'sneaky' a good, winning strategy? Rationality's enemies are certainly using these techniques. Should we not study them, if only with a view to finding an antidote?
5Simulacra
I would say it is certainly something worth studying; the understanding of how it works would be invaluable. We can decide whether we want to use it to further our goals or not once we understand it (hopefully not before; using something you don't understand is generally a bad thing imho). If we decide not to use it, the knowledge would help us educate others and perhaps prevent the 'dark ones' from using it. Perhaps something a la James Randi: create an ad whose first half uses some of the techniques and whose second half explains the mechanisms used to control inattentive viewers, with a link to somewhere with more information on understanding how it's done and why people should care.

I have more to say about my cool ethics course on weird forms of utilitarianism, but unlike with Two-Tier Rationalism, I'm uncertain of how germane the rest of these forms are to rationalism.

I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.

2SoullessAutomaton
I'm not sure what Reflection Principle you're referring to here. Google suggests two different mathematical principles but I'm not seeing how either of those would be relevant on LW, so perhaps you mean something else?
2Alicorn
The Reflection Principle, held by some epistemologists to be a constraint on rationality, holds that if you learn that you will believe some proposition P in the future, you should believe P now. There is complicated math about what you should do if you have degree of credence X in the proposition that you will have credence Y in proposition P in the future and how that should affect your current probability for P, but that's the basic idea. An alternate formulation is that you should treat your future self as a general expert.
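For reference, a formal statement of the simple case (this is van Fraassen's version; the "complicated math" generalizes it to uncertainty about what your future credence will be):

$$\mathrm{Cr}_{t}\left(P \mid \mathrm{Cr}_{t+\Delta}(P) = y\right) = y$$

Read: your credence now in P, conditional on your credence in P at a later time being y, should itself be y.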
2SoullessAutomaton
Reminds me a bit of the LW (ab)use of Aumann's Agreement Theorem, heh--at least with a future self you've got a high likelihood of shared priors. Anyway, I know arguments from practicality are typically missing the point in philosophical arguments, but this seems to be especially useless--even granting the principle, under what circumstance could you become aware of your future beliefs with sufficient confidence to change your current beliefs based on such? It seems to boil down mostly to "If you're pretty sure you're going to change your mind, get it over with". Am I missing something here?
1Alicorn
Well, that's one of my many issues with the principle - it's practically useless, except in situations that it has to be formulated specifically to avoid. For instance, if you plan to get drunk, you might know that you'll consider yourself a safe driver while you are (in the future) drunk, but that doesn't mean you should now consider your future, drunk self a safe driver. Sophisticated statements of Reflection explicitly avoid situations like this.
0JulianMorrison
Well that's pretty silly. You wouldn't treat your present self as a general expert.
2Alicorn
Wouldn't you? You believe everything you believe. If you didn't consider yourself a general expert, why wouldn't you just follow around somebody clever and agree with them whenever they asserted something? And even then, you'd be trusting your expertise on who was clever.
0gwern
"It's raining outside but I don't believe that it is."

I started an article on the psychology of rationalization, but stopped due to a mixture of time constraints and not finding many high-quality studies.

[-][anonymous]00

It seems to me possible to create a safe oracle AI. Suppose that you have a sequence predictor which is a good approximation of Solomonoff induction but which runs in reasonable time. This sequence predictor can potentially be really useful (for example, predict future SIAI publications from past SIAI publications, then proceed to read the article which gives a complete account of Friendliness theory...) and is not dangerous in itself. The question, of course, is how to obtain such a thing.

The trick relies on the concept of a program predictor. A program predictor... (read more)

The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.

Contents of my Drafts folder:

  • A previous version of my Silver Chair post, with more handwringing about why one might not stop someone from committing suicide.
  • A post about my personal motto ("per rationem, an nequequam", or "through reason, or not at all"), and how Eliezer's infamous Newcomb-Box post did and didn't change my perspective on what rationality means.
  • A listing of my core beliefs related to my own mind, beliefs/desires/etc., with a request for opinions or criticism.
  • A post on why animals in particular and any being not capa
... (read more)

Great thread idea.

Frequentist Pitfalls:

Bayesianism vs Frequentism is one thing, but there are a lot of frequentist-inspired misinterpretations of the language of hypothesis testing that all statistically competent people agree are wrong. For example, note that: p-values are not posteriors (interpreting them this way usually overstates the evidence against the null, see also Lindley's paradox), p-values are not likelihoods, confidence doesn't mean confidence, likelihood doesn't mean likelihood, statistical significance is a property of test results not hypo... (read more)
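To make the first pitfall concrete, here is a minimal sketch of Lindley's paradox; the numbers are a toy example of my own, assuming SciPy is available:

```python
# Toy illustration of Lindley's paradox: a result "significant at 0.05"
# whose Bayes factor actually favors the null. Assumes SciPy is installed.
from scipy import stats

n = 100_000   # coin flips
k = 50_316    # heads, chosen so the two-sided p-value lands just under 0.05

# Frequentist side: two-sided p-value via the normal approximation
z = (k - n / 2) / (n / 4) ** 0.5
p_value = 2 * stats.norm.sf(abs(z))

# Bayesian side: H0 is p = 0.5, H1 puts a uniform prior on p.
# Under a uniform prior, the marginal likelihood of any k is 1 / (n + 1).
likelihood_h0 = stats.binom.pmf(k, n, 0.5)
marginal_h1 = 1 / (n + 1)
bayes_factor_01 = likelihood_h0 / marginal_h1

print(f"p-value: {p_value:.3f}")                       # ~0.046, "reject the null"
print(f"Bayes factor (H0/H1): {bayes_factor_01:.0f}")  # ~34, data favor the null
```

Same data, opposite verdicts: the p-value "rejects" H0 while the Bayes factor (with equal prior odds) favors it by roughly 34:1, because H1 spreads its probability over many values of p that fit the data far worse.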

0Vladimir_Nesov
More generally, semantics of the posteriors, and of probability in general, comes from the semantics of the rest of the model, of prior/state space/variables/etc. It's incorrect to attribute any kind of inherent semantics to a model, which as you note happens quite often, when frequentist semantics suddenly "emerges" in probabilistic models. It is a kind of mind projection fallacy, where the role of the territory is played by math of the mind.
2steven0461
To return to something we discussed in the IRC meetup: there's a simple argument why commonly-known rationalists with common priors cannot offer each other deals in a zero-sum game. The strategy "offer the deal iff you have evidence of at least strength X saying the deal benefits you" is defeated by all strategies of the form "accept the deal iff you have evidence of at least strength Y > X saying the deal benefits you", so never offering and never accepting if offered should be the only equilibrium. This is completely off-topic unless anyone thinks it would make an interesting top-level post. ETA: oops, sorry, this of course assumes independent evidence; I think it can probably be fixed?
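A toy Monte Carlo of the adverse-selection step (my own construction, not steven0461's setup; it hard-codes two illustrative thresholds rather than solving for equilibrium):

```python
import random

# Zero-sum deal: value to the offerer is v, value to the accepter is -v.
# Each side sees an independent noisy signal of the deal's value *to them*
# and plays a threshold strategy, with the accepter's threshold Y > X.
random.seed(0)
X, Y = 0.5, 0.8  # hypothetical thresholds
payoffs = []
for _ in range(200_000):
    v = random.gauss(0, 1)
    offerer_signal = v + random.gauss(0, 1)
    accepter_signal = -v + random.gauss(0, 1)
    if offerer_signal > X and accepter_signal > Y:
        payoffs.append(v)  # deal goes through; offerer nets v
print(sum(payoffs) / len(payoffs))  # negative: accepted offers lose on average
```

Because Y > X, the deals that actually go through are selected for the accepter holding the stronger evidence, so the offerer's expected payoff conditional on acceptance comes out negative - which is why any such threshold pair unravels toward never offering.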
[-][anonymous]00

What an argument for atheism would look like.

Yesterday, I posted a list of reasons why I think it would be a good idea to articulate a position on atheism.

I think many people are interested, in theory, in developing some sort of deconversion programme (for example, see this one), and perhaps in creating a library of arguments and counter-arguments for debates with theists.

While I have no negative opinion of these projects, my ambition is much more modest. In a cogent argument for atheism, there would be no need to debate particular arguments. It would be ... (read more)

Thank you for this post -- I feel a bit lighter somehow having all those drafts out in the open.

2byrnema
I also think this post is a great idea. I've written 3 posts that were, objectively, not that appropriate here. Perhaps I should have waited until I knew more about what was going on at LW, but I'm one of those students that has to ask a lot of questions at first, and I'm not sure how long it would have taken me to learn the things that I wanted to know otherwise. Along these lines, what do you guys think of encouraging new members (say, with Karma < 100) to always mini-post here first? [In Second Life, there was a 'sandbox area' where you could practice building objects.] Here on LW, it would be (and is, now that it's here) immensely useful to try out your topic and gauge what the interest would be on LW. Personally, I would have been happy to post my posts (all negative scoring) somewhere out of the main thoroughfare, as I was just fishing for information and trying to get a feel for the group rather than wanting to make top-level statements.
2MBlume
I definitely think this is a post that should stay visible, whether because we start stickying a few posts or because somebody reposts this monthly. I don't know whether we need guidelines about when people should post here, and definitely don't think we need a karma cutoff. I think just knowing it's here should be enough.

Let's empty out my draft folder then....

Counterfactual Mugging v. Subjective Probability

A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.

If you didn't read it the first time, please do -- I don't plan to attempt to summarize. Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.

So, those still reading, you would give Omega the $100. You would do so because if someone told you... (read more)

7Paul Crowley
This is probably mean of me, but I'd prefer it if the next article about Omega's various goings-on set out to explain why I should care about what the rational thing to do is in Omega-ish situations.
0Vladimir_Nesov
It's a little too strong. I think you shouldn't give away the $100, because you are just not reflectively consistent. It's not you who could've run the expected utility calculation to determine that you should give it away. If you persist, by the time you must act it's not in your interest anymore; it's a lost cause. And that is the subject of another post that has been lying in draft form for some time.

If you are strong enough to be reflectively consistent, then ... You use your prior for probabilistic valuation, structured to capture expected subsequent evidence on possible branches. According to the evidence and possible decisions on each branch, you calculate the expected utility of all of the possible branches, find a global feasible maximum, and perform a component decision from it that fits the real branch. The information you have doesn't directly help in determining the global solution; it only shows which of the possible branches you are on, and thus which role you should play in the global decision, which mostly applies to the counterfactual branches. This works if the prior/utility is something inside you, worse if you have to mine information from the real branch for it in the process.

Or, for more generality, you can consider yourself cooperating with your counterfactual counterparts. The crux of the problem is that you care about counterfactuals; once you attain this, the rest is business as usual. When you are not being reflectively consistent, you let the counterfactual goodness slip away from your fingers, turning to myopically optimizing only what's real.

I have an idea I need to build up, about simplicity: how to build your mind and beliefs up incrementally, layer by layer; how perfection is achieved not when there's nothing left to add, but when there's nothing left to remove; how simple-minded people are sometimes the ones to state simple, true ideas that others - too clever and sophisticated, their knowledge like a house of cards or a bag of knots - have lost sight of; genius, learning, growing up, creativity as correlated with age, zen. But I really need to do a lot more searching about that before I can put something together.

Edit: and if I post that here, it's because if someone else wants to dig into that idea and work on it with me, that would be a pleasure.

0Paul Crowley
Do you understand Solomonoff's Universal Prior?
0infotropism
Not the mathematical proof. But the idea is that if you don't yet have data bound to observation, then you decide the prior probability of a hypothesis by looking at its complexity. Complexity, defined by looking for the smallest compressed bitstring program, for each possible Turing machine, that can be said to generate this hypothesis as the output of being run on that machine (and that is the reason why it's intractable unless you have infinite computational resources, yes?). The longer the bitstring, the less likely the hypothesis (and this has to do with the idea that you can make more permutations on larger bit strings: a one-bit string can be in 2 states, a two-bit one in 4, a three-bit one in 2^3 = 8 states, and so on). Then you somehow average the probabilities for all pairs of (Turing machine + program) into one overall probability? (I'd love to understand that formally)
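For what it's worth, the formal version of the "somehow average" step, as I understand it (stated for a fixed universal prefix machine U; changing U changes the prior only by a constant factor):

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

The prior probability of a sequence x sums the weight of every program p that makes U output something beginning with x, each weighted by 2 to the minus its length, so shorter (simpler) programs dominate the sum - the precise sense in which longer bitstrings are less likely.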
0PhilGoetz
I'm skeptical of the concept as presented here. Anything with the phrase "how perfection is achieved" sets up a strong prior in my mind saying it is completely off-base. More generally, in evolution and ecosystems I see that simplicity is good temporarily, as long as you retain the ability to experiment with complexity. Bacteria rapidly simplify themselves to adapt to current conditions, but they also experiment a lot and rapidly acquire complexity when environmental conditions change. When conditions stabilize, they then gradually throw off the acquired complexity until they reach another temporary simple state.
2JulianMorrison
The Occam ideal is "simplest fully explanatory theory". The reality is that there never has been one. They're either broken in "the sixth decimal place", like Newtonian physics, or they're missing bits, like quantum gravity, or they're short of evidence, like string theory.
0[anonymous]
The Occam ideal is "simplest fully explanatory theory". The reality is: sometimes you don't have a fully explanatory theory at all, only a broken mostly-explanatory theory. Sometimes the data isn't good enough to support any theory. Sometimes you have a theory that's obviously overcomplicated but no idea how to simplify it. And sometimes you have a bunch of theories, no easy way to test them, and it's not obvious which is simplest.
0infotropism
So maybe, to rephrase the idea: we want to strive to achieve something as close as we can to perfection - optimality? If we do, we may then start laying the bases, as well as collecting practical advice and general methods on how to do that. Not a step-by-step absolute guide to perfection; rather, the first draft of one idea that would be helpful in aiming towards optimality. Edit: also, that's a Saint-Exupéry quote that illustrates the idea; I wouldn't mean it that literally, not as more than a general guideline.
[-][anonymous]-30

"Telling more than we can know" Nisbett & Wilson

I saw this on Overcoming Bias a while ago, thanks to Pete Carlton: www.lps.uci.edu/~johnsonk/philpsych/readings/nisbett.pdf

I hope you all read this. What are the implications? Can you tell me a story ;)

Here is my story and I am sticking to it!

I have a sailing friend who makes up physics from whole cloth; it is frightening and amusing to watch, and almost impossible to correct without some real drama. It seems one only has to be close, as in horseshoes and hand grenades - and life.