
The ideas you're not ready to post

24 Post author: JulianMorrison 19 April 2009 09:23PM

I've often had half-finished LW post ideas and crossed them off for a number of reasons: mostly they were too rough or undeveloped, or I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or just want some community input before posting.

So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!


Comment author: Daniel_Burfoot 21 April 2009 06:59:09AM *  26 points [-]

The Dilbert Challenge: you are working in a company in the world of Dilbert. Your pointy-haired boss comes to you with the following demand:

"One year from today, our most important customer will deliver us a request for a high-quality reliable software system. Your job and the fate of the company depends on being able to develop and deploy that software system within two weeks of receipt of the specifications. Unfortunately we don't currently know any of the requirements. Get started now."

I submit that this preposterous demand is really a deep intellectual challenge, the basic form of which arises in many different endeavors. For example, it's reasonable to believe that at some point in the future, humanity will face an existential threat. Given that we will not know the exact nature of that threat until it's almost upon us, how can we prepare for it today?

Comment author: cousin_it 21 April 2009 01:02:12PM *  6 points [-]

Wow. I'm a relatively long-time participant, but never really "got" the reasons why we need something like rationality until I read your comment. Here's thanks and an upvote.

Comment author: thomblake 21 April 2009 03:24:31PM *  2 points [-]

That's one of the stated objectives of computer ethics (my philosophical sub-field) - to determine, in general, how to solve problems that nobody's thought of yet. I'm not sure how well we're doing at that so far.

Comment author: [deleted] 21 April 2009 10:54:25AM *  2 points [-]

deleted

Comment author: MBlume 20 April 2009 03:11:59AM *  24 points [-]

On the Care and Feeding of Rationalist Hardware

Many words have been spent here in improving rationalist software -- training patterns of thought which will help us to achieve truth, and reliably reach our goals.

Assuming we can still remember so far back, Eliezer once wrote:

But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you're a fast learner, you might learn faster - but the art of rationality isn't about that; it's about training brain machinery we all have in common.

Rationality does not require big impressive brains any more than the martial arts require big bulging muscles. Nonetheless, I think it would be rare indeed to see a master of the martial arts willfully neglecting the care of his body. Martial artists of the wisest schools strive to improve their bodies. They jog, or lift weights. They probably do not smoke, or eat unhealthily. They take care of their hardware so that the things they do will be as easy as possible.

So, what hacks exist which enable us to improve and secure the condition of our mental hardware? Some important areas that come to mind are:

  • sleep
  • diet
  • practice
Comment author: Vladimir_Golovin 20 April 2009 09:23:54AM 5 points [-]

I'd definitely want to read about a good brain-improving diet (I have no problems with weight, so I'd prefer not to mix these two issues).

Comment author: AngryParsley 20 April 2009 09:13:15AM 4 points [-]

I agree. LW doesn't have many posts about maintaining and improving the brain.

I would also add aerobic exercise to your list, and possibly drugs. For example, caffeine or modafinil can help improve concentration and motivation. Unfortunately they're habit-forming and have various health effects, so it's not a simple decision.

Comment author: randallsquared 20 April 2009 09:36:54PM 3 points [-]

I've had modafinil only once (it was amazing in the concentration-boosting department), but I have a lot of experience with caffeine, and for me its effects are primarily on mood. Large amounts of caffeine destroy concentration, offsetting any improvements, and, like other drugs, the effect grows weaker the longer you take it. On the plus side, caffeine is only weakly addictive, so you can just stop every now and then to reset things, which I do every few months.

Comment author: Drahflow 20 April 2009 08:53:51AM 2 points [-]

While we are at it:

  • caffeine
  • meditation
  • music
  • mood
  • social interaction

Also, which hacks are available to better interface our mental hardware with the real world:

  • information presentation
  • automated information filtering

Comment author: jimmy 21 April 2009 06:18:16AM 1 point [-]

Piracetam and other "nootropics" are worth checking out.

Piracetam supposedly helps with memory and cognition by increasing blood flow to the brain or something... I got some to play around with and will let you guys know if anything interesting happens.

Comment deleted 21 April 2009 06:28:39AM [-]
Comment author: jimmy 21 April 2009 04:56:29PM 0 points [-]

Thanks for the info.

I was planning on trying it without the choline first to see if it was really needed.

Any ideas on how to actually test performance?

Comment author: badger 21 April 2009 07:57:06PM 4 points [-]

Seth Roberts tracked the influence of omega-3 on brain function via arithmetic tests in R:

http://www.blog.sethroberts.net/2009/01/05/tracking-how-well-my-brain-is-working/
http://www.blog.sethroberts.net/2007/04/14/omega-3-and-arithmetic-continued/

It's a little hard to distinguish the benefit from practice and the benefit from omega-3, so ideally you'd alternate periods of supplement and no supplement.
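The alternating design could be sketched like this (hypothetical numbers, not Seth Roberts' actual data, and Python rather than his R): compare mean test times across the on- and off-supplement periods, so that improvement from practice doesn't get credited to the supplement.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Daily mean arithmetic-test response times (seconds), in alternating
# off/on/off/on supplement periods. Numbers are made up for illustration.
periods = [
    ("off", [1.52, 1.49, 1.47, 1.46]),
    ("on",  [1.38, 1.36, 1.37, 1.35]),
    ("off", [1.45, 1.44, 1.43, 1.44]),
    ("on",  [1.33, 1.34, 1.32, 1.33]),
]

on  = [x for label, xs in periods if label == "on"  for x in xs]
off = [x for label, xs in periods if label == "off" for x in xs]

# Positive effect => faster (better) while on the supplement. Because "on"
# and "off" periods interleave, a steady practice trend mostly cancels out.
effect = mean(off) - mean(on)
```

A within-period practice trend can still bias this a little, which is one reason to alternate several times rather than just once.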

Comment author: Desrtopa 13 April 2011 03:27:21PM 2 points [-]

Also, ideally you wouldn't know when you were getting omega-3 and when you were getting a placebo during the course of the experiment.

Comment author: blogospheroid 21 April 2009 06:00:07AM 1 point [-]

Increasing the level of fruit in my diet helped me maintain a positive mood for longer. I tried it when I was alone for a while in a foreign country, so I'm not sure if it was a placebo effect.

Comment author: RichardKennaway 22 April 2009 11:01:52AM *  13 points [-]

There is a topic I have in mind that could potentially require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, who I've mentioned before) built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:

  • Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.

  • Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)

  • Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

  • Inner conflict is, literally, a conflict between control systems that are trying to hold the same variable in two different states.

  • How control systems behave is not intuitively obvious, until one has studied control systems.
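A toy loop makes the set-point and no-model claims concrete. This is a sketch of my own (the gain and disturbance are made up; it is not code from Powers): the controller never predicts or models the disturbance, it only pushes its action against the perceived error, yet the perception ends up at the set-point.

```python
def run_control_loop(set_point, disturbance, gain=0.5, steps=200):
    """Integral controller: the action accumulates in proportion to error.

    No model of the environment, no prediction -- only the current error.
    """
    action = 0.0
    perception = 0.0
    for t in range(steps):
        # The environment (unknown to the controller): perception is the
        # action plus an unmodeled disturbance.
        perception = action + disturbance(t)
        error = set_point - perception
        action += gain * error
    return perception

# "Purposes are set-points": the loop holds perception at 10.0 despite a
# constant disturbance of -3.0 that it knows nothing about.
final = run_control_loop(set_point=10.0, disturbance=lambda t: -3.0)
```

Note that the same code controls against any slowly varying disturbance, which is the sense in which actions determine perceptions rather than the reverse.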

This is the only approach to the study of human nature I have encountered that does not appear to me to mistake what it looks like from the inside for the underlying mechanism.

What say you all? Vote this up or down if you want, but comments will be more useful to me.

Comment author: rhollerith 22 April 2009 12:40:21PM 2 points [-]

Heck yeah, I want to see it. I suggest adopting Eliezer's modus operandi of using a lot of words. And every time you see something in your draft post that might need explanation, post on that topic first.

Comment author: pjeby 22 April 2009 02:46:44PM 1 point [-]

I agree with some of your points -- well, all of them if we're discussing control systems in general -- but a couple of them don't quite apply to brains, as the cortical systems of brains in general (not just in humans) do use predictive models in order to implement both perception and behavior. Humans at least can also run those models forward and backward for planning and behavior generation.

The other point, about actions determining perceptions, is "sorta" true of brains, in that eye saccades are a good example of the concept. However, not all perception is like that; frogs, for example, don't move their eyes, but rely on external object movement for most of their sight.

So I think it'd be more accurate to say that where brains and nervous systems are concerned, there's a continuous feedback loop between actions, perceptions, and models. That is, models drive actions; actions generate raw data that's filtered through a model to become a perception, which may update one or more models.

Apart from that though, I'd say that your other three points apply to people and animals quite well.

Comment author: cousin_it 22 April 2009 11:09:26AM *  1 point [-]

I'd love to see this as a top-level post. Here's additional material for you: online demos of perceptual control theory, Braitenberg vehicles.

Comment author: RichardKennaway 22 April 2009 09:47:08PM *  0 points [-]

I know the PCT site :-) It was Bill Powers' first book that introduced me to PCT. Have you tried the demos on that site yourself?

Comment author: cousin_it 23 April 2009 09:41:55AM *  0 points [-]

Yes, I went through all of them several years ago. Like evolutionary psychology, the approach seems to be mostly correct descriptively, even obvious, but not easy to apply to produce actual changes. (Of course, utility-function-based approaches are much worse.)

Comment author: Vladimir_Nesov 22 April 2009 02:41:59PM 0 points [-]

Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of inference. There are methods of designing control systems that use these concepts but they are not inherent to the nature of control.

But they should act according to a rigorous decision theory, even though they often don't. It seems an elementary enough statement, so I'm not sure what you are asserting.

Comment author: cousin_it 23 April 2009 09:47:21AM *  1 point [-]

"Should" statements cannot be logically derived from factual statements. Population evolution leads to evolutionarily stable strategies, not coherent decision theories.

Comment author: Vladimir_Nesov 23 April 2009 11:46:52AM *  0 points [-]

"Should" statements come from somewhere, somewhere in the world (I'm thinking about that in the context of something close to "The Meaning of Right"). Why do you mention evolution?

Comment author: cousin_it 23 April 2009 08:57:52PM *  1 point [-]

In that post Eliezer just explains in his usual long-winded manner that morality is our brain's morality instinct, not something more basic and deep. So your morality instinct tells you that agents should follow rigorous decision theories? Mine certainly doesn't. I feel much better in a world of quirky/imperfect/biased agents than in a world of strict optimizers. Is there a way to reconcile?

(I often write replies to your comments with a mild sense of wonder whether I can ever deconvert you from Eliezer's teachings, back into ordinary common sense. Just so you know.)

Comment author: Vladimir_Nesov 23 April 2009 09:05:28PM *  0 points [-]

To simplify one of the points a little: there are simple axioms that are easy to accept (in some form). Once you grant them, the structure of decision theory follows, forcing some conclusions you intuitively disbelieve. A step further, looking at the reasons the decision theory arrived at those conclusions may persuade you that you indeed should follow them, and that you were mistaken before. No hidden agenda figures into this process; since it doesn't require interacting with anyone, it may theoretically be wholly personal: you against math.

Comment author: Daniel_Burfoot 22 April 2009 01:17:29PM *  0 points [-]

I don't necessarily believe you, but I would be happy to read what you write :-) I would also be happy to learn more about control theory. To comment further would require me to touch on unmentionable subjects.

Comment author: Psy-Kosh 26 April 2009 07:12:50PM 11 points [-]

I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: a while back I read Omohundro's vulnerability argument, but felt there were missing bits that I had to personally fill in, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true". Some things I think I can generalize a bit, or restate a bit, etc.

So, as much for myself, to organize and clear that up, as for others, I want to do a short series of "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt the need to work out, that I felt were missing.)

And I want to do it as a series, rather than a single blob post so I can step by step focus on a small chunk of the problem and make it easier to reference related rules and so on.

Would this be of any use to anyone here, though? (Maybe a good sequence for beginners, to show one reason why Bayes and decision theory are the Right Way?) Or would it be more clutter than anything else?

Comment author: Eliezer_Yudkowsky 26 April 2009 07:27:30PM 1 point [-]

It's got my upvote.

Comment author: Cyan 26 April 2009 07:58:35PM *  0 points [-]

I have a similar plan -- however, I don't know when I'll get to my post and I don't think the material I wanted to discuss would overlap greatly with yours.

Comment author: Vladimir_Nesov 26 April 2009 07:54:10PM 0 points [-]

Can you characterize a bit more concretely what you mean by zooming in on a tiny part of this planned work? It's no easy task to go from common sense to math and not shoot both your feet off in the process.

Comment author: Psy-Kosh 27 April 2009 02:16:26AM 1 point [-]

Basically, I want to reconstruct, slowly, the dutch book and vulnerability arguments, but step by step, with all the bits that confused me filled in.

The basic common sense rule that these are built on is "don't accept a situation in which you know you automatically lose" (where "lose" is used to the same level of generality that "win" is in "rationalists win.")

One of the reasons I like dutch book/vulnerability arguments is that each step ends up being relatively straightforward as to getting from that principle to the math. (Sometimes an additional concept needs to be introduced, not so much proven as much as defined and made explicit.)
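A concrete instance of the "don't accept a sure loss" rule (my own numbers, not from Omohundro): an agent whose betting prices come from P(A) = 0.6 and P(not-A) = 0.6, probabilities summing to more than 1, will accept a pair of bets that loses money however A turns out.

```python
def bet_payoff(stake, price, wins):
    """Pay price * stake for a ticket worth `stake` if the event occurs."""
    return (stake if wins else 0.0) - price * stake

# The agent's incoherent prices: P(A) = 0.6 and P(not-A) = 0.6.
totals = []
for a_happens in (True, False):
    totals.append(bet_payoff(1.0, 0.6, a_happens)         # ticket on A
                  + bet_payoff(1.0, 0.6, not a_happens))  # ticket on not-A
# Exactly one ticket pays out, so the agent nets 1.0 - 1.2 = -0.2 either way.
```

The dutch book argument then runs this in reverse: only prices obeying the probability axioms avoid such guaranteed losses.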

Comment author: MendelSchmiedekamp 20 April 2009 03:52:16AM 7 points [-]

Willpower building as a fundamental art, and some of the less obvious pitfalls, including the dangers of akrasia-circumvention techniques that simply shunt willpower from one place to another, and of overstraining, which damages your willpower reserves.

I need to hunt back down some of the cognitive science research on this before I feel comfortable posting it.

Comment author: pjeby 20 April 2009 04:47:47AM *  10 points [-]

...the dangers of akrasia-circumvention techniques that simply shunt willpower from one place to another, and of overstraining, which damages your willpower reserves.

Easy answer: don't use willpower. Ever.

I quit it cold turkey in late 2007, and can count on one hand the number of times I've been tempted to use it since.

(Edit to add: I quit it in order to force myself to learn to understand the things that blocked me, and to learn more effective ways to accomplish things than by pushing through resistance. It worked.)

Comment author: conchis 20 April 2009 12:24:26PM 11 points [-]

don't use willpower. Ever.

Could you do a post on that?

Comment author: PhilGoetz 20 April 2009 05:59:50PM 3 points [-]

Consider cognitive behavioral therapy. You don't get someone to change their behavior by telling them to try really hard. You get them to convince themselves that they will get what they want if they change their behavior.

People do what they want to do. We've gone over this in the dieting threads.

Comment author: MrShaggy 25 April 2009 03:59:19AM 3 points [-]

My idea that I'm not ready to post is now: find a way to force pjeby to write regular posts.

Comment author: MendelSchmiedekamp 20 April 2009 03:38:41PM 1 point [-]

By all means do post. Clarification would be welcome, since we're almost certainly not using the term willpower in the same way.

Comment author: pjeby 20 April 2009 04:36:12PM 3 points [-]

Clarification would be welcome, since we're almost certainly not using the term willpower in the same way.

I'm using it to mean relying on conscious choice, in the moment, to overcome preference reversal: forcing yourself to do something that, at that moment, you'd prefer not to do, or to refrain from something that you'd prefer to do.

What I do instead, is find out why my preference has changed, and either:

  1. Remove that factor from the equation, either by changing something in my head, or in the outside world, or

  2. Choose to agree with my changed preference, for the moment. (Not all preference reversals are problematic, after all!)

Comment author: MendelSchmiedekamp 20 April 2009 04:54:01PM 1 point [-]

From that usage your claim makes much more sense.

Willpower in my usage is more general: whenever impulses are overridden or circumvented. In your example, it includes the conscious choice you describe, but also more subtle costs, like the cognitive computation of determining the "why" and forestalling the impulse while internal or external factors are removed.

My main point is that willpower is a limited resource that ebbs and flows during cognitive computation, often due to changing costs. But it can be trained up, conserved, and refreshed effectively, if certain hazards can be avoided.

Comment author: ciphergoth 20 April 2009 07:32:06AM 1 point [-]

I repeat my usual plea at this point: please read Breakdown of Will before posting on this.

Comment author: pjeby 20 April 2009 04:04:05PM 2 points [-]

I repeat my usual plea at this point: please read Breakdown of Will before posting on this.

That book doesn't actually contain any solutions to anything, AFAICT. The two useful things I've gotten from it that enhanced my existing models were:

  1. The idea of conditioned appetites, and

  2. The idea that "reward" and "pleasure" are distinct.

There were other things that I learned, of course, like his provocative reward-interval hypothesis that unifies the mechanism of things like addiction, compulsion, itches and pain on a single, time-based scale. But that's only really interesting in an intellectual-curiosity sort of way at the moment; I haven't figured out anything one can DO with it, that I couldn't already do before.

Even the two useful things I mentioned, are mostly useful in explaining why certain things happen, and why certain of my techniques work on certain things. They don't really give me anything that can be turned into actual improvements on the state of the art, although they do suggest some directions for stretching what I apply some things to.

Anyway, if you're already familiar with the basic ideas of discounting and preference reversal, you're not going to get a lot from this book in practical terms.

OTOH, if you think it'd be cool to know how and why your bargains with yourself fail, you might find it interesting reading. But I'm already quite familiar with how that works on a practical level, and the theory really adds nothing to my existing practical advice of, "don't do that!"

(Really, the closest the book comes to giving any practical advice is to vaguely suggest that maybe willpower and intertemporal bargaining aren't such good ideas. Well, not being a scientist, I can state it plainly: they're terrible ideas. You want coherent volition across time, not continuous conflict and bargaining.)

Comment author: MendelSchmiedekamp 20 April 2009 12:05:15PM 0 points [-]

I'll take a closer look at it.

Comment author: matt 20 April 2009 10:20:20PM 0 points [-]
Comment author: swestrup 21 April 2009 10:15:08PM 6 points [-]

I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:

Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box', and one good technique I found for that was to just blurt out the first technological idea that occurred to me when presented with a technological problem, but to do so in a non-serious tone of voice. Often enough the idea is one that nobody else has thought of, or one that was automatically dismissed for what, in retrospect, were insufficient reasons. Other times it's so obviously stupid an idea that everyone thinks I'm making a joke. It doesn't hurt that I often do deliberately joke.

I don't know if this is a technique others should adopt or not, but I've found it has made me far less afraid of appearing stupid when presenting ideas.

Comment author: PhilGoetz 20 April 2009 01:58:04AM *  13 points [-]

We are Eliza: A whole lot of what we think is reasoned debate is pattern-matching on other people's sentences, without ever parsing them.

I wrote a bit about this in 1998.

But I'm not as enthused about this topic as I was then, because then I believed that parsing a sentence was reasonable. Now I believe that humans don't parse sentences even when reading carefully. The bird the cat the dog chased chased flew. Any linguist today would tell you that's a perfectly fine English sentence. It isn't. And if people can't parse grammatical structures to just two levels of recursion, I doubt that recursion, and generative grammars, are involved at all.
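The recursion point can be made concrete (a sketch of my own, not PhilGoetz's code): a single recursive rule happily generates center-embedded sentences like the one above, even though humans choke past roughly one level of embedding. The gap between what the rule generates and what we can process is exactly what's at issue.

```python
def center_embedded(nouns, verbs):
    """Generate a center-embedded sentence; verbs[i] belongs to nouns[i].

    With nouns [bird, cat, dog] and verbs [flew, chased, chased]:
    the dog chased the cat, which chased the bird, which flew.
    """
    assert len(nouns) == len(verbs)
    noun_phrase = " ".join("the " + n for n in nouns)
    verb_phrase = " ".join(reversed(verbs))  # innermost clause's verb first
    return noun_phrase + " " + verb_phrase

s = center_embedded(["bird", "cat", "dog"], ["flew", "chased", "chased"])
# s == "the bird the cat the dog chased chased flew"
```

Adding a fourth noun/verb pair is trivial for the rule and hopeless for a human reader.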

Comment author: pangloss 20 April 2009 08:18:02AM 3 points [-]

I believe that linguists would typically claim that it is formed by legitimate rules of English syntax, but point out that there may be processing constraints on humans that eliminate some syntactically well-formed sentences from the category of grammatical sentences of English.

Comment author: JulianMorrison 20 April 2009 02:42:11AM 1 point [-]

Eh, I could read it, with some stack juggling. I can even force myself to parse the "buffalo" sentence ;-P

Comment author: William 20 April 2009 07:57:38AM 3 points [-]

You can force yourself to parse the sentence, but I suspect that the part of your brain you use to parse it is different from the one you use in normal reading, and is in fact closer to the part of the brain you use to solve a puzzle.

Comment author: Risto_Saarelma 14 April 2011 05:46:40AM 0 points [-]

A bit like described in this Stephen Bond piece?

Comment author: MBlume 20 April 2009 03:02:49AM *  4 points [-]

Winning Interpersonally

cousin_it would like to know how rationality has actually helped us win. However, in his article, he completely gives up on rationality in one major area, admitting that "interpersonal relationships are out."

Alex strenuously disagrees, asking "why are interpersonal relationships out? I think rationality can help a great deal here."

(And, of course, I suppose everyone knows my little sob story by now.)

I'd like to get a read from the community on this question.

Is rationality useless -- or worse, a liability when dealing with other human beings? How much does it matter if those human beings are themselves self-professed rationalists? It's been noted that Less Wrong is overwhelmingly male. I have no idea whether this represents an actual gender differential in the desire for epistemic rationality, but if it does, it means most male Less Wrongers should not expect to wind up dating rationalists. Does this mean that it is necessary for us to embrace less-than-accurate beliefs about, e.g., our own desirability, that of our partners, various inherently confused concepts of romantic fate, or whatever supernatural beliefs our partners wish to defend? Does this mean it is necessary to make the world more rational, simply so that we can live in it?

(note: this draft was written a while before Gender and Rationality, so there's probably some stuff I'd rewrite to take that into account)

Comment author: pjeby 20 April 2009 03:42:58PM 6 points [-]

Is rationality useless -- or worse, a liability when dealing with other human beings?

Only if you translate this into meaning you've got to communicate like Spock, talk constantly about things that bore, depress, or shock people, or require them to think when they want to relax, etc.

(That article, btw, is by a guy who figured out how to stop being so "rational" in his personal relationships. Also, as it's a pickup artist's blog, there may be some images or language that may be offensive or NSFW. YMMV.)

Comment author: SoullessAutomaton 20 April 2009 09:31:58PM 3 points [-]

That article seems kind of dodgy to me. Do people really fail to realize that the behaviors he describes are annoying and will alienate people?

The article also gets on my nerves a bit because it assumes that learning to be socially appealing to idiots is 1) difficult and 2) rewarding. Probably I'm just not in his target demographic, so oh well.

Comment author: pjeby 20 April 2009 10:01:51PM -1 points [-]

Do people really fail to realize that the behaviors he describes are annoying and will alienate people?

Well, he did, and I did, so that's a sample right there.

The article also gets on my nerves a bit because it assumes that learning to be socially appealing to idiots is 1) difficult and 2) rewarding.

Sounds like you missed the part of the article where he pointed out that thinking of those people as "idiots" is snobbery on your part. The value of a human being's life isn't really defined by the complexity of the ideas that get discussed in it.

Comment author: SoullessAutomaton 20 April 2009 10:12:52PM *  0 points [-]

The value of a human being's life isn't really defined by the complexity of the ideas that get discussed in it.

No, but the value to me of interacting with them is. I would like nothing more than to know that they live happy and fulfilling lives that do not involve me.

Also, "snobbery" is a loaded term. Is there a reason I am obligated to enjoy the company of people I do not like?

Comment author: pjeby 20 April 2009 10:18:04PM *  4 points [-]

No, but the value to me of interacting with them is. I would like nothing more than to know that they live happy and fulfilling lives that do not involve me.

Sounds like you also missed the part about acquiring an appreciation for the more experiential qualities of life, and for more varieties of people.

Also, "snobbery" is a loaded term.

More so than "idiots"? ;-)

Is there a reason I am obligated to enjoy the company of people I do not like?

Only if you want to increase your opportunities for enjoyment in life, be successful at endeavors that involve other people, reduce the amount of frustration you experience at family gatherings... you know, generally enjoying yourself without needing to have your brain uploaded first. ;-)

Comment author: SoullessAutomaton 20 April 2009 10:50:06PM *  2 points [-]

Sounds like you also missed the part about acquiring an appreciation for the more experiential qualities of life, and for more varieties of people.

I do have an appreciation for those things. I find them enjoyable, distracting, but ultimately unsatisfying. That's like telling someone who eats a healthy diet to acquire an appreciation for candy.

More so than "idiots"? ;-)

Haha, I wondered if you would call me on that. You are right, of course, and for the most part my attitude towards people isn't as negative as I made it sound. I was annoyed by the smug and presumptuous tone of that article.

Only if you want to increase your opportunities for enjoyment in life, be successful at endeavors that involve other people, reduce the amount of frustration you experience at family gatherings... you know, generally enjoying yourself without needing to have your brain uploaded first. ;-)

I do fine enjoying myself as it is, and it's not like I can't work with people--I'm talking only about socializing or other leisure-time activities. And as far as that goes, I absolutely fail to see the benefit of socializing with at least 90% of the people out there. They don't enjoy the things I enjoy and that's fine; why am I somehow deficient for failing to enjoy their activities?

Like I said, I don't think I'm really in the target demographic for that article, and I'm not really sure what you're trying to convince me of, here.

Comment author: pjeby 20 April 2009 11:06:03PM 3 points [-]

I'm not really sure what you're trying to convince me of, here.

I'm not trying to convince you of anything. You asked questions. I answered them.

I do have an appreciation for those things. I find them enjoyable, distracting, but ultimately unsatisfying. That's like telling someone who eats a healthy diet to acquire an appreciation for candy.

Hm, so who's trying to convince who now? ;-)

I was annoyed by the smug and presumptuous tone of that article.

Interesting. I found its tone to be informative, helpful, and compassionately encouraging.

And as far as that goes, I absolutely fail to see the benefit of socializing with at least 90% of the people out there. They don't enjoy the things I enjoy and that's fine; why am I somehow deficient for failing to enjoy their activities?

Who said you were? Not even the article says that. The author wrote, in effect, that he realized that he was being a snob and missing out on things by insisting on making everything be about ideas and rightness and sharing his knowledge, instead of just enjoying the moments, and by judging people with less raw intelligence as being beneath him. I don't see where he said anybody was being deficient in anything.

My only point was that sometimes socializing is useful for winning -- even if it's just enjoying yourself at times when things aren't going your way. I personally found that it limited my life too much to have to have a negative response to purely- or primarily- social interactions with low informational or practical content. Now I have the choice of being able to enjoy them for what they are, which means I have more freedom and enjoyment in my life.

But notice that at no time or place did I use the word "deficiency" to describe myself or anyone else in that. Unfulfilled potential does not equal deficiency unless you judge it to be such.

And if you don't judge or fear it to be such, why would the article set you off? If you were really happy with things as they are, wouldn't you have just said, "oh, something I don't need", and gone on with your life? Why so much protest?

Comment author: SoullessAutomaton 20 April 2009 11:25:39PM *  1 point [-]

I don't see where he said anybody was being deficient in anything.

This was the impression I got from the article's tone, as well as your previous comments--an impression of "you should do this for your own good". If that was not the intent, I apologize; it is easy to misread tone over the internet.

And if you don't judge or fear it to be such, why would the article set you off? If you were really happy with things as they are, wouldn't you have just said, "oh, something I don't need", and gone on with your life? Why so much protest?

Because there have been other times where people expressed opinions about what I ought to be doing for enjoyment (cf. the kind of helpfulness described as optimizing others ) and I find it irritating. It's a minor but persistent pet peeve.

I remarked on the article originally mainly because the advice it offered seemed puzzlingly obvious.

Comment author: pjeby 20 April 2009 11:37:00PM 1 point [-]

This was the impression I got from the article's tone, as well as your previous comments--an impression of "you should do this for your own good".

Ah. All I said in the original context was that rationality is only an obstacle in social situations if you used it as an excuse to make everything about you and your ideas/priorities/values, and gave the article as some background on the ways that "rational" people sometimes do that. No advice was given or implied.

As for the article's tone, bear in mind that it's a pickup artist's blog (or more precisely, the blog of a trainer of pickup artists).

So, his audience is people who already want to improve their social skills, and therefore have already decided it's a worthy goal to do so. That's why the article doesn't attempt to make a case for why someone would want to improve their social skills -- it is, after all, a major topic of the blog.

Comment author: MendelSchmiedekamp 20 April 2009 03:46:05AM 3 points [-]

I have much I could say on the subject of interpersonal application of rationality (especially to romantic relationships), much of it positive and promising. Unfortunately I don't know yet how well it will match up with rationality as its taught in the OB/LW style - which will decide how easy that is for me to unpack here.

Comment author: MBlume 20 April 2009 03:48:46AM *  2 points [-]

Well, this thread might be a good place to start =)

ETA: I don't think anything should ever be said against an idea which is shown to work. If its epistemic basis is dodgy, we can make a project of shoring it up, but the fact that it works means there's something supporting it, even if we don't yet fully understand it.

Comment author: MendelSchmiedekamp 20 April 2009 03:15:59PM *  0 points [-]

What I do need to do is think more clearly (for which now is not the best time) on whether or not the OB/LW flavor of rationality training is something which can communicate the methods I've worked out.

Then it's a matter of trade-offs between forcing the OB/LW flavor or trying to use a related, but better fitting flavor. Which means computing estimates on culture, implicit social biases and expectations. All of which takes time and experiments, much of which I expect to fail.

Which I suppose exemplifies the very basics of what I've found works - individual techniques can be dangerous because when over-generalized there are simply new biases to replace old ones. Instead, forget what you think you know and start re-building your understanding from observation and experiment. Periodically re-question the conclusions you make, and build your understanding from bite size pieces to larger and larger ones.

Which has everything to do with maintaining rational relationships with non-rational, and even deeply irrational people, especially romantic ones. But this takes real work, because each relationship is its own skill, its own "technique", and you need to learn it on the fly. On the plus side, if you get good at it you'll be able to learn how to deal with complex adaptive systems quickly - sort of a meta-skill, as it were.

Comment author: Alicorn 20 April 2009 02:52:59PM 2 points [-]

There are people who will put up with a relentlessly and honestly rationalist approach to one's friendship or other relationship with them. However, they are rare and precious, and I use the words "put up with" instead of "enjoy and respond in kind" because they do it out of affection, and (possibly, in limited situations) admiration that does not inspire imitation. Not because they are themselves rationalists, reacting rationally to the approach, but because they just want to be friends enough to deal.

Comment author: anonymouslyanonymous 20 April 2009 03:17:15PM *  2 points [-]

It's been noted that Less Wrong is incredibly male. I have no idea whether this represents an actual gender differential in desire for epistemic rationality, but if it does, it means most male Less Wrongers should not expect to wind up dating rationalists. Does this mean that it is necessary for us to embrace less than accurate beliefs about, eg, our own desirability, that of our partner, various inherently confused concepts of romantic fate, or whatever supernatural beliefs our partners wish to defend? Does this mean it is necessary to make the world more rational, simply so that we can live in it?

"We commonly speak of the sex 'drive', as if it, like hunger, must be satisfied, or a person will die. Yet there is no evidence that celibacy is in any way damaging to one's health, and it is clear that many celibates lead long, happy lives. Celibacy should be recognised as a valid alternative sexual lifestyle, although probably not everyone is suited to it." -J. S. Hyde, Understanding Human Sexuality, 1986

Source.

Comment author: MBlume 20 April 2009 05:22:00PM 6 points [-]

I have been in a happy, mutually satisfying romantic/sexual relationship once in my life. We had one good year together, and it was The. Best. Year. Of. My. Life. I know people say that when something good happens to you, you soon adjust, and you wind up as happy or as sad as you were before, but that was simply not my experience. I'd give just about anything to have that again. Such is my utility function, and I do not intend to tamper with it.

Comment author: anonymouslyanonymous 20 April 2009 11:07:47PM 6 points [-]

People differ. All I'm trying to say is this: telling someone something is a necessary precondition for their leading a meaningful life, when that is not the case, is likely to create needless suffering.

Comment author: MBlume 21 April 2009 05:15:33PM 1 point [-]

indeed

Comment author: MTGandP 07 July 2015 04:59:12AM 3 points [-]

This is really remarkable to read six years later, since, although I don't know you personally, I know your reputation as That Guy Who Has Really Awesome Idyllic Relationships.

Comment author: PhilGoetz 20 April 2009 06:06:13PM *  3 points [-]

I've read several times that that feeling lasts 2-3 years for most people. That's the conventional wisdom. I've read once that, for some people, it lasts their whole life long. (I mean, once in a scholarly book. I've read it many times in novels.)

Comment author: MBlume 20 April 2009 06:25:18PM 0 points [-]

I rather suspect I might be one of those people. It's been over three years since I first fell for her, and over nine months since those feelings were in any way encouraged, and I still feel that attachment today.

If it turns out I am wired to stay in love for the long term, that'd certainly be a boon under the right circumstances.

Rather sucks now though.

Comment author: Jack 20 April 2009 11:11:54PM 0 points [-]

Don't know if it applies to you. But I imagine a very relevant factor is whether or not you get attached to anyone else.

Comment author: [deleted] 02 October 2012 06:14:38PM 1 point [-]

there is no evidence that celibacy is in any way damaging to one's health

Er...

Comment author: cousin_it 20 April 2009 11:52:51AM *  1 point [-]

To expand on my phrase "interpersonal relationships are out"...

Talking to people, especially the opposite sex, strongly exercises many subconscious mechanisms of our brain. Language, intonation, emotion, posture -- you just can't process everything rationally as it comes at you in parallel at high bandwidth. Try dancing from first principles; you'll fail. If you possess no natural talent for it, you have no hope of winning an individual encounter through rationality. You can win by preparation - slowly develop such personal qualities as confidence, empathy and sense of humor. I have chosen this path; it works.

Comment author: Nanani 22 April 2009 01:02:06AM 0 points [-]

Rationality helping in relationships (here used to mean all interpersonal, not just romance) :

  • Use "outside view" to figure out how your interactions look to others; not only to the person you are talking to but also to the social web around you.

  • Focus on the goals, yours and theirs. If these do not match, the relationship is doomed in the long run, romantic or not.

  • Obviously, the whole list of cognitive biases and how to counter them. When you -know- you are doing something stupid, catching yourself rationalizing it and whatnot, you learn not to do that stupid thing.

Comment author: SoullessAutomaton 20 April 2009 10:05:06AM 0 points [-]

Is rationality useless -- or worse, a liability when dealing with other human beings? How much does it matter if those human beings are themselves self-professed rationalists?

The answers to this are going to depend strongly on how comfortable we are with deception when dealing with irrational individuals.

Comment author: [deleted] 20 April 2009 12:17:44PM *  0 points [-]

deleted

Comment author: MBlume 20 April 2009 03:08:53AM *  10 points [-]

This doesn't even have an ending, but since I'm just emptying out the drafts folder

Memetic Parasitism

I heard a rather infuriating commercial on the radio today. There's no need for me to recount it directly -- we've all heard the type. The narrator spoke of the joy a woman feels in her husband's proposal, of how long she'll remember its particulars, and then, for no apparent reason, transitioned from this to a discussion of shiny rocks, and where we might think of purchasing them.

I hardly think I need to belabor the point, but there is no natural connection between shiny rocks and promises of monogamy. There was not even any particularly strong empirical connection between the two until about a hundred years ago, when some men who made their fortunes selling shiny rocks decided to program us to believe there was.

What we see here is what I shall call memetic parasitism. We carry certain ideas, certain concepts, certain memes to which we attach high emotional valence. In this case, that meme is romantic love, expressed through monogamy. An external agent contrives to derive some benefit by attaching itself to that meme.

Now, it is important to note when describing a Dark pattern that not everything which resembles this pattern is necessarily dark. Carnation attempts to connect itself in our minds to the Burns and Allen show. Well, on reflection, it seems this is right. Carnation did bring us the Burns and Allen show. It paid the salary of each actor, each writer, each technician, who created the show each week. Carnation deserves our gratitude, and any custom which may result from it. Romantic love existed for many centuries before the shiny-rock-sellers came along, and they have done nothing to enhance it.

Of course, I think most of us have seen this pattern before. This comic makes the point rather well, I think.

So, right now, I know that the shiny-rock-sellers want to exploit me, this outrages me, and I choose to have nothing to do with them. How do we excite people's shock and outrage at the way the religions have tried to exploit them?

Comment author: Nanani 22 April 2009 01:07:18AM 3 points [-]

A Series of Defense Against the Dark Arts would not be unwelcome, especially for those who haven't gone through the OB backlog. Voting up.

Comment author: PhilGoetz 20 April 2009 02:09:12AM 17 points [-]

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basic vectors during PCA, and so will reduce all vectors in the future into a different 25D space.

In just this way, two people with life experiences that differ in a biased way (due to eg socioeconomic status, country of birth, culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations that each constructs internally are incommensurate; they exist in different spaces, which introduce different errors. When they reason on their compressed data, they will reach different conclusions, even if they are using the same reasoning algorithms and are executing them flawlessly. Furthermore, it would be very hard for them to discover this, since the compression scheme is unconscious. They would be more likely to believe that the other person is lying, nefarious, or stupid.
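The analogy is easy to simulate. Here is a minimal sketch, scaled down from 500/25 dimensions to 50/5; the data, the random seed, and the bias applied to the second agent's experience are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_basis(data, k):
    """Return the top-k principal directions of a dataset (rows = samples)."""
    centered = data - data.mean(axis=0)
    # Rows of vt are the principal directions, ordered by variance explained.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

dim, k = 50, 5  # scaled down from the 500D / 25D in the comment

# Two "life experiences": the same underlying world, sampled with a bias.
world = rng.normal(size=(dim, dim))            # latent structure shared by both
exp_a = rng.normal(size=(400, dim)) @ world    # agent A's experiences
exp_b = rng.normal(size=(400, dim)) @ world    # agent B's experiences
exp_b[:, :10] *= 5.0                           # B's experience over-weights some directions

basis_a = pca_basis(exp_a, k)
basis_b = pca_basis(exp_b, k)

# Give both agents the *same* new observation ("the same words")...
x = rng.normal(size=dim)
code_a = basis_a @ x   # A's internal representation
code_b = basis_b @ x   # B's internal representation

# ...and compare what each reconstructs from its compressed code.
recon_a = basis_a.T @ code_a
recon_b = basis_b.T @ code_b
gap = np.linalg.norm(recon_a - recon_b)
print(f"distance between internal reconstructions: {gap:.3f}")
```

With the same input vector, the two reconstructions disagree by a nonzero amount -- not because either agent reasons badly, but because their compressed spaces differ.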

Comment author: ChrisHibbert 20 April 2009 05:19:52AM 5 points [-]

If you're going to write about this, be sure to account for the fact that many people report successful communication in many different ways. People say that they have found their soul-mate, many of us have similar reactions to particular works of literature and art, etc. People often claim that someone else's writing expresses an experience or an emotion in fine detail.

Comment author: Daniel_Burfoot 21 April 2009 06:41:31AM *  4 points [-]

Yeah. I thought about this a lot in the context of the Hanson/Yudkowsky debate about the unmentionable event. As was frequently pointed out, both parties aspired to rationality and were debating in good faith, with the goal of getting closer to the truth.

Their belief was that two rationalists should be able to assign roughly the same probability to the same sequence of events X. That is, if the event X is objectively defined, then the problem of estimating p(X) is an objective one and all rational persons should obtain roughly the same value.

The problem is that we don't - maybe can't - estimate probabilities in isolation of other data. All estimates we make are really of conditional probabilities p(X|D), where D is a person's unique huge background dataset. The background dataset primes our compression/inference system. To use the Solomonoff idea, our brains construct a reasonably short code for D, and then use the same set of modules that were helpful in compressing D to compress X.
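The p(X|D) point in miniature: two agents apply the identical update rule -- here, an assumed Beta(1, 1) prior via Laplace's rule of succession, with made-up observation counts -- and still report different probabilities for the same event, because their background data D differ:

```python
from fractions import Fraction

# Two rational agents estimate the probability of the same event X
# ("the next flip is heads") after seeing different background data D.
# Both use the identical rule: a uniform Beta(1, 1) prior updated by Bayes.

def posterior_heads(heads, tails):
    """P(next flip is heads | observed data), uniform prior (rule of succession)."""
    return Fraction(heads + 1, heads + tails + 2)

p_alice = posterior_heads(heads=8, tails=2)  # Alice's data D showed mostly heads
p_bob = posterior_heads(heads=3, tails=7)    # Bob's data D showed mostly tails

print(p_alice, p_bob)  # 3/4 vs 1/3
```

The 3/4-versus-1/3 gap comes entirely from D; neither agent has made an inference error.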

Comment author: David_Gerard 13 April 2011 02:37:31PM *  2 points [-]

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.

FWIW, this is one of the problems postmodernism attempts to address: the bit that's a series of exercises in getting into other people's heads to read a given text.

Comment author: Jade 20 August 2012 03:46:51PM *  1 point [-]

Does it work for understanding non-human peoples?

Comment author: John_Maxwell_IV 20 April 2009 05:39:51PM 1 point [-]

No idea what PCA means, but this sounds like a very mathematical way of expressing an idea that is often proposed by left-wingers in other fields.

Comment author: conchis 20 April 2009 08:33:20PM 2 points [-]
Comment author: MendelSchmiedekamp 20 April 2009 03:40:42AM 1 point [-]

I want to write about this too, but almost certainly from a very different angle, dealing with communication and the flow of information. And perhaps at some point I will have the time.

Comment author: JulianMorrison 19 April 2009 11:19:03PM *  10 points [-]

Buddhism.

What it gets wrong. Supernatural stuff - rebirth, karma in the magic sense, prayer. Thinking Buddha's cosmology was ever meant as anything more than an illustrative fable. Renunciation. Equating positive and negative emotions with grasping. Equating the mind with the chatty mind.

What it gets right. Meditation. Karma as consequences. There is no self, consciousness is a brain subsystem, emphasis on the "sub" (Cf. Drescher's "Cartesian Camcorder" and psychology's "system two"). The chatty mind is full of crap and a huge waste of time, unless used correctly. Correct usage includes noticing mostly-subconscious thought loops (Cf. cognitive behavioral therapy). A lot of everyday unreason does stem from grasping, which roughly equates to "magical thinking" or the idea that non-acknowledgment of reality can change it. This includes various vices and dark emotions, including the ones that screw up attempted rationality.

What rationalists should do. Meditate. Notice themselves thinking. Recognize grasping as a mechanism. Look for useful stuff in Buddhism.

Why I can't post. Not enough of an expert. Not able to meditate myself yet.

Comment author: SoullessAutomaton 19 April 2009 11:34:43PM 20 points [-]

It actually strikes me that a series of posts on "What can we usefully learn from X tradition" would be interesting. Most persistent cultural institutions have at least some kind of social or psychological benefit, and while we've considered some (cf. the martial arts metaphors, earlier posts on community building, &c.) there are probably others that could be mined for ideas as well.

Comment author: Drahflow 20 April 2009 08:42:55AM 1 point [-]

I'd be similarly interested in covering philosophical Daoism, the path to wisdom I follow, and believe to be mostly correct.

Things they get wrong: Some of them believe in rebirth, too much reverence for "ancient masters" without good reevaluation, some believe in weird miracles.

Things they get right: Meditation, purely causal view of the world, free will as local illusion, relaxed attitude to pretty much everything (-> less bias from social influence and fear of humiliation), the insight that akrasia is overcome best not by willpower but by adjusting yourself to feel that what you need to do is right, apparently ways to actually help you (at least me) with that, a decent way to accept death as something natural.

Comment author: gwern 21 April 2009 03:05:07AM *  3 points [-]

Things they get wrong: Some of them believe in rebirth, too much reverence for "ancient masters" without good reevaluation, some believe in weird miracles.

I kept waiting for 'alchemy' and immortality to show up in your list!

I recently read through an anthology of Taoist texts, and essentially every single thing postdating the Lieh Tzu or the Huai-nan Tzu (-200s) was absolute rubbish, but the preceding texts were great. I've always found this abrupt disintegration very odd.

Comment author: David_Gerard 13 April 2011 02:42:35PM 1 point [-]

I kept waiting for 'alchemy' and immortality to show up in your list!

Know what alchemy's good for? Art and its production. Terrible chemistry, great for creation of art.

Know what's actually a good text for this angle on alchemy? Promethea by Alan Moore, in which he sets out his entire system. (Not only educational, but a fantastic book that is at least as good as his famous '80s stuff.)

Comment author: [deleted] 13 April 2011 02:55:19PM 2 points [-]

Respectfully disagree. I found Promethea to be poorly executed. There was a decent idea somewhere in there, but I think he was too distracted by the magic system to find it.

One exception -- the aside about how the Christian and Muslim Prometheas fought during the Crusades. That was nicely done.

Comment author: blogospheroid 21 April 2009 05:15:29AM *  -1 points [-]

Not enough of an expert on Buddhism, but I live its mother religion - Hinduism. There are enough similarities for me to comment on a few of your comments.

Rebirth - The question of which part of your self you choose to identify with is a persistent thing in OB/LW. When X and Y conflict and you choose to align yourself with X instead of Y, WHO OR WHAT has made that decision? One might say the consensus in the mind, or more modern answers. The point is that there are desires and impulses which stem from different levels of personality within you. There are animal impulses, basic human impulses (evo-psych), societal drives. There are many levels to you. The persistent question in almost all the dharma religions is - what do you choose to identify with? Even in rebirth, the memories of past lives are erased and the impulses that drove you the greatest at your time of death decide where in the next life you would be. If you are essentially still hungering for stuff, the soul would be sent to stations where that hunger can be satiated. If you are essentially at peace, having lived a full life, you will go to levels that are subtler and presumably more abstract. You become more soul and less body, in a crude sense.

Vedanta does believe in souls. I'm holding out for a consistent theory of everything of physics before I drop my beliefs about that one.

Comment author: PhilGoetz 20 April 2009 01:38:09AM *  10 points [-]

Aumann agreements are pure fiction; they have no real-world applications. The main problem isn't that no one is a pure Bayesian. There are 3 bigger problems:

  • The Bayesians have to divide the world up into symbols in exactly the same way. Since humans (and any intelligent entity that isn't a lookup table) compress information based on their experience, this can't be contemplated until the day when we derive more of our mind's sensory experience from others than from ourselves.
  • Bayesian inference is slow; pure Bayesians would likely be outcompeted by groups that used faster, less-precise reasoning methods, which are not guaranteed to reach agreement. It is unlikely that this limitation can ever be overcome.
  • In the name of efficiency, different reasoners would be highly orthogonal, having different knowledge, different knowledge compression schemes and concepts, etc.; reducing the chances of reaching agreement. (In other words: If two reasoners always agree, you can eliminate one of them.)
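For contrast, here is the idealized mechanism under the strong assumptions the bullets above reject -- a common prior, an identical carving-up of the world, and conditionally independent evidence. The 1/4-versus-3/4 coin and the flip counts are invented:

```python
from fractions import Fraction

# Toy Aumann-style agreement: a coin's bias is either 1/4 or 3/4, with a
# shared 50/50 prior. Each agent sees private flips. Because the flips are
# conditionally independent given the bias, announcing posterior odds is as
# good as sharing the raw data, and one exchange suffices for agreement.

def odds_from_flips(heads, tails):
    """Likelihood ratio for bias=3/4 vs bias=1/4 given private flips."""
    return Fraction(3, 1) ** heads * Fraction(1, 3) ** tails

odds_a = odds_from_flips(heads=4, tails=1)  # agent A's private evidence
odds_b = odds_from_flips(heads=1, tails=3)  # agent B's private evidence

# After each announces their odds, both multiply in the other's evidence:
pooled = odds_a * odds_b
p_high = pooled / (pooled + 1)  # shared posterior that bias = 3/4
print(p_high)  # -> 3/4, and both agents now report the same number
```

One exchange settles it here only because both agents symbolize the world identically and start from the same prior; drop that, as the first bullet argues, and the announced number no longer carries the evidence behind it.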

This would probably have to wait until May.

Comment author: conchis 20 April 2009 08:50:19PM *  0 points [-]

"Pure fiction" and "no real world application" seem overly strong. Unless you are talking about individuals actually reaching complete agreement, in which case the point is surely true, but relatively trivial.

The interesting question (real world application) is surely how much more we should align our beliefs at the margin.

Also, whether there are any decent quality signals we can use to increase others' perceptions that we are Bayesian, which would then enable us to use each others' information more effectively.

Comment author: Lawliet 20 April 2009 02:02:15AM 4 points [-]

I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.

Seems more like a group chat than a post, but let's see what you all think.

Comment author: jscn 20 April 2009 07:54:33PM 1 point [-]

I've found the work of Stefan Molyneux to be very insightful with regards to this (his other work has also been pretty influential for me).

You can find his books for free here. I haven't actually read his book on this specific topic ("Real-Time Relationships: The Logic of Love") since I was following his podcasting and forums pretty closely while he was working up to writing it.

Comment author: Lawliet 21 April 2009 03:51:29AM 0 points [-]

Do you think you could summarise it for everybody in a post?

Comment author: jscn 22 April 2009 01:04:44AM 1 point [-]

I'm not confident I could do a good job of it. He proposes that most problems in relationships come from our mythologies about ourselves and others. In order to have good relationships, we have to be able to be honest about what's actually going on underneath those mythologies. Obviously this involves work on ourselves, and we should help our partner to do the same (not by trying to change them, but by assisting them in discovering what is actually going on for them). He calls his approach to this kind of communication the "Real-Time Relationship."

To quote from the book: "The Real-Time Relationship (RTR) is based on two core principles, designed to liberate both you and others in your communication with each other: 1. Thoughts precede emotions. 2. Honesty requires that we communicate our thoughts and feelings, not our conclusions."

For a shorter read on relationships, you might like to try his "On Truth: The Tyranny of Illusion". Be forewarned that, even if you disagree, you may find either book an uncomfortable read.

Comment author: Alicorn 20 April 2009 03:05:19AM 0 points [-]

This sounds very interesting, but I don't think I'm qualified to write it either.

Comment author: steven0461 29 April 2009 03:48:54PM 2 points [-]

Any interest in a top-level post about rationality/poker inter-applications?

Comment author: Alicorn 29 April 2009 03:57:24PM *  0 points [-]

I would be interested if I knew how to play poker. Does your idea generalize to other card games (my favorite is cassino, I'd love to figure out how to interchange cassino strategies with rationality techniques), or is something poker-specific key to what you have to say?

Comment author: steven0461 29 April 2009 04:10:00PM 0 points [-]

Does your idea generalize to other card games

Mostly I think it doesn't. Some of it may generalize to games in general.

Comment author: AllanCrossman 29 April 2009 03:54:11PM 0 points [-]

Yes.

Comment author: Vladimir_Golovin 20 April 2009 01:09:33PM *  2 points [-]

The ideal title for my future post would be this:

How I Confronted Akrasia and Won.

It would be an account of my dealing with akrasia, which has so far resulted in eliminating two decade-long addictions and finally being able to act according to my current best judgment. I also hope to describe a practical result of using these techniques (I specified a target in advance and I'm currently working towards it.)

Not posted because:

  1. The techniques are not yet tested even on myself. They worked perfectly for about a couple of months, but I wasn't under any severe stress or distraction. Also, I think they don't involve much willpower (defined as forcing myself to do something), but perhaps they do -- and the only way to find out is to test them in a situation when my 'willpower muscles' are exhausted.

  2. The ultimate test would be the practical result which I plan to achieve using these techniques, and it's not yet reached.

Comment author: orthonormal 23 April 2009 04:43:30AM 5 points [-]

The ideal title for my future post would be this:

How I Confronted Akrasia and Won.

That is (perhaps) unintentionally hilarious, BTW.

Comment author: steven0461 25 April 2009 04:28:49PM *  5 points [-]

I'm vaguely considering doing a post about skeptics. It seems to me they might embody a species of pseudo-rationality, like Objectivists and Spock. (Though it occurs to me that if we define "S-rationality" as "being free from the belief distortions caused by emotion", then "S-rationality" is both worthwhile and something that Spock genuinely possesses.) If their supposed critical thinking skills allow them to disbelieve in some bad ideas like ghosts, Gods, homeopathy, UFOs, and Bigfoot, but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse, then how does the whole thing differ from loyalty to the scientific community? Loyalty to the scientific community isn't the worst thing, but there's no need to present it as independent critical thinking.

I'm sure there are holes in this line of thought, so all criticism is welcome.

Comment author: Annoyance 25 April 2009 04:40:26PM 1 point [-]

"but also in some good ideas like cryonics and not in other bad ideas like extraterrestrial contact, ecological footprints, p-values, and quantum collapse,"

Your listing of 'bad' and 'good' ideas reveals more about your personal beliefs than any supposed failings of skeptics.

Comment author: steven0461 25 April 2009 04:50:46PM *  0 points [-]

OK, so can you name any idea that you think is bad, is accepted/fashionable in science-oriented circles, but is rejected by skeptics for the right reasons?

Comment author: Annoyance 25 April 2009 05:33:56PM 0 points [-]

Whether I think some idea is bad is completely irrelevant. What matters is whether I can show that there are compelling rational reasons to conclude that it's bad. There are lots of claims that I suspect may be true but that I cannot confirm or disprove. I don't complain about skeptics not disregarding the lack of rational support for those claims, nor do I suggest that the nature of skepticism be altered so that my personal sacred cows are spared.

Comment author: JulianMorrison 20 April 2009 12:26:48AM 5 points [-]

What would a distinctively rationalist style of government look like? Cf. Dune's Bene Gesserit government by jury, what if a quorum of rationalists reaching Aumann Agreement could make a binding decision?

What mechanisms could be put in place to stop politics being a mind-killer?

Why not posted: undeveloped idea, and I don't know the math.

Comment author: XFrequentist 10 September 2010 12:25:18AM *  1 point [-]

This is a year late, but it's simply not ok that Futarchy not be mentioned here.

So there you are.

Comment author: blogospheroid 21 April 2009 05:53:36AM *  1 point [-]

Mencius Moldbug believes that if we were living in a world of many mini sovereign corporations who compete for citizens, then they would be forced to be rational. They will try to seek every way to keep paying customers (taxpayers).

Another dune idea could be relevant over here - The god emperor. Have a really long lived guy be king. He cannot take the short cuts that many others do and has to think properly on how to govern.

Addendum - I understand that this is a system builder's perspective, and not an entrepreneur's perspective, i.e. a meta answer rather than an answer, sorry for that.

Comment author: JulianMorrison 21 April 2009 08:39:47AM 0 points [-]

That sounds like an evolution-style search, and he ought to be more careful, evolution only optimizes for the utility function - in this case, the ability to trap and hold "customers".

I would categorize that among the pre-rational systems of government - alongside representative democracy, kings, constitutions, etc. A set of rules or a single decider to do the thinking for a species who can't think straight on their own.

I was more interested in what a rationalist government would be like.

Comment author: byrnema 20 April 2009 05:26:04AM *  5 points [-]

Yet another post from me about theism?

This time, pushing for a more clearly articulated position. Yes, I realize that I am not endearing myself by continuing this line of debate. However, I have good reasons for pursuing it.

  • I really like LW and the idea of a place where objective, unbiased truth is The Way. Since I idealistically believe in Aumann’s Agreement theorem, I think that we are only a small number of debates away from agreement.

  • To the extent to which LW aligns itself with a particular point of view, it must be able to defend that view. I don’t want LW to be wrong, and am willing to be a nuisance to make sure.

  • If defending atheism is not a first priority, can we continue using religion as a convenient example of irrationality, even as the enemy of rationality?

  • There is a definite sense that theism is not worth debating, that the case is "open-and-shut". If so, it should be straightforward to draft a master argument. (Five separate posts of analogies is not strong evidence in my Bayesian calculation that the case is open-and-shut.)

  • A clear and definitive argument against theism would make it possible for theists (and yourselves, as devil's advocates) to debate specific points that are not covered adequately in the argument. (If you are about to downvote me on this comment, think about how important it would be to permit debate on an ideology that is important to this group. Right now it is difficult to debate whether religion is rational because there is no central argument to argue with.)

  • Relative to the ‘typical view’, atheism is radical. How does a religious person visiting this site become convinced that you’re not just a rationality site with a high proportion of atheists?

Comment author: MBlume 20 April 2009 05:50:18AM 17 points [-]

(Um, this started as a reply to your comment but quickly became its own "idea I'm not ready to post" on deconversions and how we could accomplish them quickly.)

Upvoted. It took me months of reading to finally decide I was wrong. If we could put that "aha" moment in one document... well, we could do a lot of good.

Deconversions are tricky though. Did anyone here ever read Kissing Hank's Ass? It's a scathing moral indictment of mainline Christianity. I read it when I was 15 and couldn't sleep for most of a night.

And the next day, I pretty much decided to ignore it. I deconverted seven years later.

I believe the truth matters, and I believe you do a person a favor by deconverting them. But if you've been in for a while, if you've grown dependent on, for example, believing in an eternal life... there's a lot of pain in deconversion, and your mind's going to work hard to avoid it. We need to be prepared for that.

If I were to distill the reason I became an atheist into a few words, it would look something like:

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds. But mental things are complicated. In order to understand them you have to break them down into parts, something we're still working hard to do. If you say "the universe exists because someone created it," it feels like you've explained something, because agents are part of the most fundamental building blocks from which you build your world. But agency, intelligence, desire, and all the rest, are complicated properties which have a specific history here on earth. Sort of like cheesecake. Or the foxtrot. Or socialism.

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cook book and discover that the cheesecake has its origins in the Roman Empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

A complicated theory that you never would have thought of in the first place had you been less biased is not a theory that might still be right -- it's just plain wrong. In the same sense that, if you're looking for a murderer in New York city, and you bring a suspect in on the advice of one of your lieutenants, and then it turns out the lieutenant picked the suspect by reading a horoscope, you have the wrong guy. You don't keep him there because he might be the murderer after all, and you may as well make sure. With all of New York to canvass, you let him go, and you start over. So too with agency-based explanations of the universe's beginning.

I've rambled terribly, and were that a top-level post, or a "master argument" it would have to be cleaned up considerably, but what I have just said is why I am an atheist, and not a clever argument I invented to support it.

Comment author: PhilGoetz 20 April 2009 05:57:19PM *  5 points [-]

If somebody started talking about the earth starting because of cheesecake, you'd wonder where the cheesecake came from. You'd look in a history book or a cook book and discover that the cheesecake has its origins in the Roman Empire, as a result of, well, people being hungry, and as a result of cows existing, and on and on, and you'd wonder how all those complex causes could produce a cheesecake predating the universe, and what sense it would make cut off from the rich causal net in which we find cheesecakes embedded today. Intelligence should not be any different. Agency trips up Occam's razor, because humans are wired to expect there to always be agents about. But an explanation of the universe which contains an agent is an incredibly complicated theory, which only presents itself to us for consideration because of our biases.

You're right; yet no one ever sees it this way. Before Darwin, no one said, "This idea that an intelligent creator existed first doesn't simplify things."

Here is something I think would be useful: A careful information-theoretic explanation of why God must be complicated. When you explain to Christians that it doesn't make sense to say complexity originated because God created it, and that God must be complicated, Christians reply (and I'm generalizing here because I've heard these replies so many times) with one of two things:

  • God is outside of space and time, so causality doesn't apply. (I don't know how to respond to this.)
  • God is not complicated. God is simple. God is the pure essence of being, the First Cause. Think of a perfect circle. That's what God is like.

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Of course, putting this explanation on LW might do no good to anybody.

Comment author: jimmy 21 April 2009 06:12:57AM 0 points [-]

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Except that the library of all possible books includes the Encyclopedia Britannica but is far simpler.

Comment author: SoullessAutomaton 21 April 2009 09:43:53AM 0 points [-]

Except that the library of all possible books includes the Encyclopedia Britannica but is far simpler.

Presumably, God can also distinguish between "the set of books with useful information" and "the set of books containing only nonsense". That is quite complex indeed.

Comment author: jimmy 21 April 2009 04:52:29PM *  0 points [-]

I'm afraid I wasn't clear. I am not arguing that "god" is simple or that it explains anything. I'm just saying that god's knowledge is compressible into an intelligent generator (AI).

The source code isn't likely to be 10 lines, but then again, it doesn't have to include the Encyclopedia Britannica to tell you everything the encyclopedia can, once it grows up and learns.

F=m*a is enough to let you draw out all physically possible trajectories from the set of all trajectories, and it is still rather simple.
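jimmy's compressibility point can be made concrete with a toy enumeration (the two-letter alphabet and target string below are my own illustration, not anything from the comment): a generator whose source is a few fixed lines eventually emits any given string over its alphabet, however long that string is.

```python
from itertools import count, product

def all_strings(alphabet="ab"):
    """Enumerate every finite string over the alphabet, shortest first."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def enumeration_reaches(target, alphabet="ab", limit=100_000):
    """True if the enumeration emits `target` within `limit` strings."""
    gen = all_strings(alphabet)
    for _ in range(limit):
        if next(gen) == target:
            return True
    return False
```

The generator's length is independent of the target's length, which is the sense in which "all possible books" is simpler than any one encyclopedia.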

Comment author: Nick_Tarleton 21 April 2009 04:57:24PM 1 point [-]

It shouldn't be hard to explain that, if God knows at least what is in the Encyclopedia Britannica, God has at least enough complexity to store that information.

Keep in mind that if this complexity was derived from looking at external phenomena, or at the output of some simple computation, it doesn't reduce the prior probability.

Comment author: pangloss 21 April 2009 06:31:36AM *  0 points [-]

You say: "You're right; yet no one ever sees it this way. Before Darwin, no one said, 'This idea that an intelligent creator existed first doesn't simplify things.'"

I may have to look up where it gets argued, but I am pretty sure people challenged that before Darwin.

Comment author: David_Gerard 13 April 2011 02:23:48PM 3 points [-]

Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds.

These two sentences, particularly the second, just explained for me why humans expect minds to be ontologically fundamental. Thank you!

Comment author: shokwave 13 April 2011 02:38:28PM 0 points [-]

Thank you for bringing this post to my attention! I'm going to use those lines.

Comment author: John_Maxwell_IV 20 April 2009 05:56:54PM 2 points [-]

It might be why you're an atheist, but do you think it would have swayed your Christian self much? I highly doubt that your post would come near to deconverting anyone. Many religious people believe that souls are essential for creativity and intelligence, and they won't accept the "you're wired to see intelligence" argument if they disbelieve in evolution (not uncommon).

To deconvert people to atheism quickly, I think you need a sledgehammer. I still haven't found a really good one. Here are some areas that might be promising:

  1. Ask them why God won't drop a grapefruit from the sky to show he exists. "He loves me more than I can imagine, right? And more than anything he wants me to know him, right? And he's all powerful, right?" To their response: "Why does God consider blindly believing in him in the absence of evidence virtuous? Isn't that sort of thing a made-up religion would say about their god to keep people faithful?"

  2. The Problem of Evil: why do innocent babies suffer and die from disease?

  3. I've heard there are lots of contradictions in the bible. Maybe someone who is really dedicated could find some that are really compelling. Personally, I'm not interested enough in this topic to spend time reading religious texts, but more power to those who are.

A few moderately promising ones: Why does God heal cancer patients but not amputees? Why do different religious denominations disagree, when they could just ask God for the answer? Why would a benevolent God send people who happened to be unlucky enough not to hear about him to eternal damnation?

Comment author: Alexandros 23 April 2009 09:42:01PM 4 points [-]

I think a very straightforward contradiction is here: http://skepticsannotatedbible.com/contra/horsemen.html

2 Samuel and 1 Chronicles are supposed to be parallels, telling the same story. Yet one of them probably lost or gained a zero along the way. Many Christians who see this are forced to retreat to a 'softer' interpretation of the Bible that allows for errors in transcription, etc. It's the closest to a quick 'n' dirty sledgehammer I have ever had. And a follow-up: Why hasn't this been discussed in your church? Surely a group of truth-seekers wouldn't shy away from such fundamental criticisms, even if only to defuse them.

Comment author: orthonormal 21 April 2009 01:46:04AM 4 points [-]

Problem is, theists of reasonable intelligence spend a good deal of time honing and rehearsing their replies to these. They might be slightly uneasy with their replies, but if the alternative is letting go of all they hold dear, then they'll stick to their guns. Catching them off guard is a slightly better tactic.

Or, to put it another way: if there were such a sledgehammer lying around, Richard Dawkins (or some other New Atheist) would be using it right now. Dawkins uses all the points you listed, and more; and the majority of people don't budge.

Comment author: MBlume 23 April 2009 06:12:31AM 3 points [-]

do you think it would have swayed your christian self much?

Well...it did sway my Christian self. My Christian self generated those arguments and they, with help from Eliezer's writings against self-deception, annihilated that self.

Comment author: orthonormal 23 April 2009 04:40:47AM 1 point [-]

That's as good an exposition of this point as any I've seen. It deserves to be cleaned up and posted visibly, here on LW or somewhere else.

Comment author: MBlume 23 April 2009 05:00:29AM 0 points [-]

thanks =)

Comment author: Jack 20 April 2009 08:38:27PM *  0 points [-]

So

  1. (x): x is a possible entity. The more complicated x is, the less likely it is to exist, controlling for other evidence.

  2. (x): x is a possible entity. The more intelligent x is, the more complicated x is, controlling for other properties.

  3. God is maximally intelligent.

∴ God's existence is maximally unlikely, unless there is other evidence or unless it has other properties that make its existence more likely.

(Assume intelligent to refer to the possession of general intelligence)

I think most theists will consent to (1), especially given that it's implicit in some of their favorite arguments. They consent to (3), unless they mean "God" as merely a cosmological constant or first cause, in which case we're having a completely different debate. So the issue is (2). I'm sure some of the cognitive science types can give evidence for why intelligence is necessarily complicated. There is, however, definitive evidence for the correlation of intelligence and complexity: human brains are vastly more complex than the brains of other animals, computers get more complicated the more information they hold, etc. It might actually be worth making the distinction between intelligence and the holding of data. It is a lot easier to see how the more information something contains, the more complicated it is, since one can just compare two sets of data, one bigger than the other, and see that one is more complicated. Presumably, God needs to contain information on everyone's behavior, the events that happen at any point in time, prayer requests, etc.
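Jack's three premises might be rendered in notation roughly as follows (a loose rendering on my part, with C for complexity and I for intelligence; Jack doesn't commit to these symbols):

```latex
\begin{aligned}
&\text{(1) } \forall x:\ \Pr(x \text{ exists}) \text{ is decreasing in } C(x), \text{ other evidence held fixed}\\
&\text{(2) } \forall x:\ C(x) \text{ is increasing in } I(x), \text{ other properties held fixed}\\
&\text{(3) } I(\mathrm{God}) \text{ is maximal}\\
&\therefore\ \Pr(\mathrm{God}\text{ exists}) \text{ is minimal, absent other evidence}
\end{aligned}
```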

Btw, is there a way for me to use symbolic logic notation in xml?

Comment author: MBlume 20 April 2009 09:03:36PM 0 points [-]

hmm...if we can get embedded images to work, we're set.

http://www.codecogs.com/png.latex?\int_a^b\frac{1}{\sqrt{x}}dx

Click that link, and you'll get a rendered png of the LaTeX expression I've placed after the ?. Replace that expression with another and, well, you'll get that too. If you're writing a top-level post, you can use this to pretty quickly embed equations. Not sure how to make it useful in a comment though.

Comment author: Vladimir_Nesov 20 April 2009 09:54:03PM *  4 points [-]

Here it is:

Source code:

![](http://www.codecogs.com/png.latex?\int_a^b\frac{1}{\sqrt{x}}dx)

(It was mentioned before.)

Comment author: MBlume 20 April 2009 10:22:50PM 0 points [-]

awesome =)

Comment author: Nanani 22 April 2009 12:55:20AM 5 points [-]

I really like LW and the idea of a place where objective, unbiased truth is The Way.

Something about this phrase bothers me. I think you may be confused as to what is meant by The Way. It isn't about any specific truth, much less Truth. It is about rationality, ways to get at the truth and update when it turns out that truth was incomplete, or facts change, and so on.

Promoting an abstract truth is very much -not- the point. I think it will help your confusion if you can wrap your head around this. My apologies if these words don't help.

Comment author: ciphergoth 20 April 2009 07:27:47AM 5 points [-]

I would prefer us not to talk about theism all that much. We should be testing ourselves against harder problems.

Comment author: MBlume 20 April 2009 07:31:45AM 4 points [-]

Theism is the first, and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches.

If we really intend to make more rationalists, theism will be the first hurdle, and there will be an art to clearing that hurdle quickly, cleanly, and with a minimum of pain for the deconverted. I see no reason not to spend time honing that art.

Comment author: ciphergoth 20 April 2009 07:45:11AM 7 points [-]

First, the subject is discussed to death. Second, our target audience at this stage is almost entirely atheists; you start on the people who are closest. Insofar as there are theists we could draw in, we will probably deconvert them more effectively by raising the sanity waterline and having them drown religion without our explicit guidance on the subject; this will also do more to improve their rationality skills than explicit deconversion.

Comment author: MBlume 20 April 2009 07:54:08AM 2 points [-]

sigh You're probably right.

I have a lot of theists in my family and in my social circle, and part of me still wants to view them as potential future rationalists.

Comment author: Vladimir_Nesov 20 April 2009 04:31:33PM *  9 points [-]

We should teach healthy habits of thought, not fight religion explicitly. People should be able to feel horrified by the insanity of supernatural beliefs for themselves, not argued into considering them inferior to the alternatives.

Comment author: gjm 20 April 2009 12:46:09PM 3 points [-]

They are potential future rationalists. They're even (something like) potential present rationalists; that is, someone can be a pretty good rationalist in most contexts while remaining a theist. This is precisely because the internal forces discouraging them from changing can be so strong.

Comment author: JulianMorrison 20 April 2009 12:43:00PM 4 points [-]

When you don't have a science, the first step is to look for patterns. How about assembling an archive of de-conversions that worked?

Comment author: Eliezer_Yudkowsky 20 April 2009 08:29:04PM 2 points [-]

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

Comment author: David_Gerard 13 April 2011 02:20:40PM 2 points [-]

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

The first place that springs to mind to look is deconversion-oriented documents that theists warn each other off and which they are given prepared opinions on. The God Delusion is my favourite current example - if you ever hear a theist dissing it, ask if they've read it; it's likely they won't have, and will (hopefully) be embarrassed by having been caught cutting'n'pasting someone else's opinions. What others are there that have produced this effect?

Comment author: Alicorn 13 April 2011 02:24:49PM *  1 point [-]

People are more willing than you might think to openly deride books they admit that they have never read. I know this because I write Twilight fanfiction.

Comment author: wedrifid 13 April 2011 02:54:24PM 4 points [-]

People are more willing than you might think to openly deride books they admit that they have never read.

Almost as if there are other means than just personal experience by which to collect evidence.

"Standing on the shoulders of giants hurling insults at Stephenie Meyer's."

Comment author: Zetetic 13 April 2011 07:45:13PM 1 point [-]

I am very curious about your take on those who attack Twilight for being anti-feminist, specifically for encouraging young girls to engage in male-dependency fantasies.

I've heard tons of this sort of criticism from men and women alike, and since you appear to be the de facto voice of feminism on Less Wrong, I would very much appreciate any insight you might be able to give. Are these accusations simply overblown nonsense in your view? If you have already addressed this, would you be kind enough to post a link?

Comment author: pjeby 21 April 2009 12:17:16PM 2 points [-]

The problem with current techniques is that nothing works reliably. If you can go so high as to have a document that works to deconvert 10% of educated theists, then you can start examining for regularities in what worked and didn't work. The trouble is reaching that high initial bar.

It seems to me that Derren Brown once did some sort of demonstration in which he mass-converted some atheists to theists, and/or vice versa. Perhaps we should investigate what he did. ;-)

Comment author: ciphergoth 21 April 2009 12:26:47PM *  3 points [-]

(Updated following Vladimir_Nesov's comment - thanks!)

Comment author: cabalamat 20 April 2009 05:44:47PM 2 points [-]

Theism is the first, and oldest problem. We have freed ourselves from it, yes, but that does not mean we have solved it. There are still churches.

Indeed. When a community contains more than a critical number of theists, their irrational decision making can harm themselves and the whole community. By deconverting theists, we help them and everyone else.

I'd like to see a discussion on the best ways to deconvert theists.

Comment author: CronoDAS 20 April 2009 10:19:10PM 1 point [-]

Capture bonding seems to be an effective method of changing beliefs.

Comment author: saturn 21 April 2009 05:11:15AM *  0 points [-]

Here's the open-and-shut case against theism: People often tell stories to make themselves feel better. Many of these stories tell of various invisible and undetectable entities. Theory 1 is that all such stories are fabrications; Theory 2 is that an arbitrary one is true and the rest are fabrications. Theory 2 contains more burdensome detail but doesn't predict the data better than Theory 1.

Although to theists this isn't a very convincing argument, it is a knock-down argument if you're a Bayesian wannabe with sane priors.
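saturn's argument can be sketched with toy numbers (entirely illustrative; the comment gives none): if every theory predicts the observed data equally well, the likelihood terms cancel and the comparison reduces to the priors, where Theory 2's burdensome detail of singling out one story among N costs it a factor of N.

```python
# Toy Bayesian comparison, illustrative numbers only.
# Theory 1: all N mutually incompatible stories are fabrications.
# Theory 2 (for story k): story k is true, the rest are fabrications.
n_stories = 1000
prior_all_fabricated = 0.5                           # Theory 1
prior_some_story_true = 1 - prior_all_fabricated
prior_per_story = prior_some_story_true / n_stories  # spread over N stories

# Both theories fit the data equally well, so likelihoods cancel and
# posterior odds equal prior odds:
odds_against_any_particular_story = prior_all_fabricated / prior_per_story
```

With these numbers the odds come out 1000:1 against any particular story, before any story-specific evidence is weighed.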

Comment author: CannibalSmith 20 April 2009 03:42:46AM *  5 points [-]

Some bad ideas on the theme "living to win":

  • Murder is okay. There are consequences, but it's a valid move nonetheless.
  • Was is fun. In fact, it's some of the best fun you can have as long as you don't get disabled or killed permanently.
  • Being a cult leader is a winning move.
  • Learn and practice the so called dark arts!
Comment author: PhilGoetz 20 April 2009 06:00:35PM *  1 point [-]

Was is fun.

"War", I think you mean.

Comment author: byrnema 20 April 2009 03:19:30AM *  4 points [-]

A criticism of practices on LW that are attractive now but which will hinder "the way" to truth in the future; that lead to a religious idolatry of ideas (a common fate of many "in-groups") rather than objective detachment. For example,

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

(2) Use of analogies without formally defining the ideas behind them leads to content not only saying more than it intends to (or more than it strictly should) but also having meta-meanings that are attractive but dangerous because they're not explicit. [edit: "formally" was a poor choice of words, "clearly" is my intended meaning]

And any other examples people think of, now and as LW develops.

Comment author: pangloss 20 April 2009 06:01:43AM 2 points [-]

I am not sure I agree with your second concern. Sometimes premature formalization can take us further off track than leaving things with intuitively accessible handles for thinking about them.

Formalizing things, at its best, helps reveal the hidden assumptions we didn't know we were making, but at its worst, it hard-codes some simplifying assumptions into the way we start talking and thinking about the topic at hand. For instance, as soon as we start to formalize sentences of the form "If P, then Q" as material implication, we adopt an analysis of conditionals that straightjackets them into the role of an extensional (truth-functional) semantics. It is not uncommon for someone who just took introductory logic to train themselves into forcing natural language into this mold, rather than evaluating the adequacy of the formalism for explaining natural language.

Comment author: PhilGoetz 20 April 2009 06:03:03PM 0 points [-]

(1) linking to ideas in original posts without summarizing the main ideas in your own words and how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer

I plan to keep doing this; it saves time.

(2) Use of analogies without formally defining the ideas behind them leads to content not only saying more than it intends to (or more than it strictly should) but also having meta-meanings that are attractive but dangerous because they're not explicit. [edit: "formally" was a poor choice of words, "clearly" is my intended meaning]

Isn't this inherent in using analogies? Are you really saying "Don't use analogies"?

Comment author: byrnema 20 April 2009 08:43:44PM *  1 point [-]

I like analogies. I think they are useful in introducing or explaining an idea, but shouldn't be used as a substitute for the idea.

Comment author: PhilGoetz 20 April 2009 01:47:06AM *  3 points [-]

(rationalism:winning)::(science:results)

We've argued over whether rationalism should be defined as that which wins. I think this is isomorphic to the question whether science should be defined as that which gets good results.

I'd like to look at the history of science in the 16th-18th centuries, to see whether such a definition would have been a help or a hindrance. My priors say that it would have been a hindrance, because it wouldn't have kicked contenders out of the field rapidly.

Under position 1, "science = good results", you would have competition only on the level of individual theories. If the experimental approach to transforming metals won out over mystical Hermetic formulations, that would tell you nothing about whether you would expect an experimental approach to crop fertilization to win out over prayer to the gods.

Position 2, that science is a methodology that turns out to have good results, lets epistemologies, or families of theories, compete. You can group a whole bunch of theories together and call them "scientific", and a whole bunch of other theories together and call them "tradition", and other theories together and call them "mystic", etc.; and test the families against each other. This gives you much stronger statistics. This is probably what happened.
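The "stronger statistics" point can be illustrated with a quick calculation (the counts below are made up for illustration): pooling many theories into one family shrinks the standard error of the family's estimated success rate, so families can be told apart long before any single theory accumulates enough trials.

```python
import math

def success_rate_stderr(successes, trials):
    """Standard error of an estimated success rate (normal approximation)."""
    p = successes / trials
    return math.sqrt(p * (1 - p) / trials)

# One theory tested 10 times versus a family of 50 theories tested
# 10 times each, with the same underlying success rate of 0.7:
single_theory = success_rate_stderr(7, 10)    # wide uncertainty
whole_family = success_rate_stderr(350, 500)  # same rate, much tighter
```

The family estimate's standard error is roughly a seventh of the single theory's, which is why testing "science vs. tradition vs. mysticism" as families gives usable answers so much sooner.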

Comment author: pangloss 23 April 2009 05:40:20PM 2 points [-]

The Implications of Saunt Lora's Assertion for Rationalists.

For those who are unfamiliar, Saunt Lora's Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.

A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz's views on the nature of possibility and Arnauld's objection, and David Lewis's views on the nature of possibility and Kripke's objection, are but one striking example. If there is anything to the claim that we are, to some extent, stuck recycling old ideas, rather than genuinely/interestingly widening the range of views, it seems as though this should have some import for rationalists.

Comment author: David_Gerard 13 April 2011 02:09:51PM 2 points [-]

It would first require a usable definition of "genuinely new" not susceptible to goalpost-shifting and that is actually useful for anything.

Comment author: [deleted] 13 April 2011 09:42:48PM *  2 points [-]

That was part of the joke in Anathem. Saunt Lora's assertion had actually first been stated by Saunt X, but it also occurs in the pre-X writings of Saunt Y, and so on...

Comment author: abigailgem 20 April 2009 03:42:36PM *  2 points [-]

Scott Peck, author of "The Road Less Travelled", which was extremely popular ten years ago, theorised that people become more mature in stages, and can get stuck at a lower level of maturity. From memory, the stages were: 1. Selfish, unprincipled; 2. Rule-following; 3. Rational; 4. Mystical.

Christians could be either rule-following, a stage of maturity most people could leave behind in their teens, needing a big friendly policeman in the sky to tell them what to do - or Mystical.

Mystical people had a better understanding of the World because they did not expect it to be "rational", following a rationally calculable and predictable course. This fits my map in some ways: there are moments when I relate better to someone if I rely on instinct, rather than calculating what is going on, just as I can hit something better if I let my brain do the work rather than try to calculate a parabolic course for the rock.

I am not giving his "stage four" as well as he could. If you like, I would read up in his books, including "Further along the RLT" and "The RLT and beyond" and "The Different Drum" (I used to be a fan, and still hold him in respect).

You could then either decide you were convinced by Scott Peck, or come up with ways to refute him.

Would you like an article on this? Or would you rather just read about him on wikipedia?

Wikipedia says,

Stage IV is the stage where an individual starts enjoying the mystery and beauty of nature. While retaining skepticism, he starts perceiving grand patterns in nature. His religiousness and spirituality differ significantly from that of a Stage II person, in the sense that he does not accept things through blind faith but does so because of genuine belief. Stage IV people are labeled as Mystics.

Comment author: Nanani 22 April 2009 12:50:41AM 0 points [-]

I suspect most, if not all, regulars will dismiss these stages as soon as reading convinces them that the words "rational" and "mystical" are being used in the right sense. That is, few here would be impressed by "enjoying the mystery of nature".

However it might be useful for beginners who haven't read through the relevant sequences. Voted up.

Comment author: PhilGoetz 20 April 2009 01:54:56AM *  2 points [-]

Regarding all the articles we've had about the effectiveness of reason:

Learning about different systems of ethics may be useless. It takes a lot of time to learn all the forms of utilitarianism and their problems, and all the different ethical theories. And all that people do is look until they find one that lets them do what they wanted to do all along.

IF you're designing an AI, then it would be a good thing to do. Or if you've already achieved professional and financial success, and got your personal life in order (whether that's having a wife, having a family, whatever), and are in a position of power, it would be good to do. But if you're a grad student or a mid-level manager, it may be a big waste of time. You've already got big obvious problems to work on; you don't need to study esoteric theories of utility to find a problem to work on.

Comment author: Drahflow 20 April 2009 10:06:25AM 1 point [-]

I have this idea in my mind that my value function differs significantly from that of Eliezer. In particular, I cannot agree to blowing up Huygens in that Baby-Eater scenario he presented.

To summarize shortly: He gives a scenario which includes the following problem:

Some species A in the universe has as a core value the creation of unspeakable pain in their newborn. Some species B has as core value removal of all pain from the universe. And there is humanity.

In particular there are (besides others) two possible actions: (1): Enable B to kill off all of A, without touching humanity, but kill off some humans in the process. (2): Block all access between all three species, leading to a continuation of the acts of A, killing significantly fewer humans in the process.

Eliezer claims action 1 is superior to action 2, and I cannot agree.

First, some reason why my intuition tells me that Eliezer got it wrong: consider the situation with wild animals, say in Africa. Lions kill gazelles in the thousands. And we are not talking about clean, nice killing; we are talking about taking bites out of living animals. We are talking about slow, agonizing death. And we can be pretty certain about the qualia of that experience, just by extrapolating from brain similarity and our own small painful experiences. Yet I don't see anybody trying to stop the lions, and I think that is right.

For me the only argument for killing off species A goes like: "I do not like pain" -> "Pain has negative utility" -> "Incredible pain has incredibly negative utility" -> "Incredible pain needs to be removed from the universe". That sounds wrong to me at the last step. Namely, I feel that our value function ought to (and actually does) include a term which discounts things happening far away from us. In particular, I think that the value of things happening somewhere in the universe which are (by the scenario) guaranteed not to have any effects on me is exactly zero.

But more importantly, it sounds wrong at the second-to-last step, claiming that incredible pain has incredibly negative utility. Why do we dislike our own pain? Because it is the hardware response closing the feedback loop for our brain in the case of stupidity; it's evolution's way of telling us "don't do that". Why do we dislike pain in other people? Due to sympathy, i.e. due to the reduced efficiency of said people in our world.

Do I feel more sympathy towards mammals than towards insects? Yes. Do I feel more sympathy towards apes than towards other mammals? Again, yes. So the trend seems to indicate that I feel sympathy towards complex thinking things.

Maybe that's only because I am a complex thinking thing, but then again, maybe I just value possible computation. Computation generally leads to knowledge, and knowledge leads to more action possibilities. And more diversity in the things carrying out computation will probably lead to more diversity in knowledge, which I consider A Good Thing. Hence, I opt for saving species A, thus creating a lot more pain, but also some more computation.

As you can probably tell, my line of reasoning is not quite clear yet, but I feel that I got a term in my value function here, that some other people seem to lack, and I wonder whether that's because of misunderstanding or because of different value functions.

Comment author: SoullessAutomaton 20 April 2009 09:24:22PM *  1 point [-]

(1): Enable B to kill off all of A, without touching humanity, but kill off some humans in the process. (2): Block all access between all three species, leading to a continuation of the acts of A, killing significantly fewer humans in the process.

I seem to recall that there was no genocide involved; B intended to alter A such that they would no longer inflict pain on their children.

The options were:

  1. B modifies both A and humanity to eliminate pain; also modifies all three races to include parts of what the other races value.
  2. Central star is destroyed, the crew dies; all three species continue as before.
  3. Human-colonized star is destroyed; lots of humans die, humans remain as before otherwise; B is assumed to modify A as planned above to eliminate pain.
Comment author: PhilGoetz 20 April 2009 05:50:20PM 0 points [-]

Eliezer claims action 1 is superior to action 2, and I cannot agree.

Does Eliezer's position depend on the fact that group A is using resources that could otherwise be used by group B, or by humans?

Group B's "eliminate pain" morality itself has mind-bogglingly awful consequences if you think it through.

Comment author: MartinB 21 April 2009 04:08:51PM 1 point [-]

Putting together a rationalist toolset: including all the methods one needs to know, but also, and very much so, the real-world knowledge that helps one get along or ahead in life. This doesn't have to be reinvented, just pointed out and evaluated.

In short: I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.

Comment author: thomblake 21 April 2009 04:28:15PM 1 point [-]

I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.

I disagree with much of this. Not sure what 'reasonable shape' means, but I'm not above ignoring physical fitness in the pursuit of more lofty goals. Same with smoking - while I'll grant that there are more efficient ways to get the benefits of smoking, for an established smoker-turned-rationalist it might not be worth the time and effort to quit. And I'm also not sure what you mean by 'positive psychology'.

Comment author: MartinB 22 April 2009 10:53:56PM 1 point [-]

Just some examples. It might be that smoking is not as bad as it's currently presented. Optimizing lifestyle for higher chances of survival seems reasonable to me, but might not be everyone's choice. What I do not find useful in any instance are grumpy rationalists who scorn the whole world. Do you agree with the importance of "knowledge about the real world"?

Regarding positive psychology: look up Daniel Gilbert and Martin Seligman. Both gave nice talks on TED.com and have something to say about happiness.

Comment author: pre 20 April 2009 10:03:15AM 1 point [-]

memetic engineering

The art of manipulating the media, especially news, and public opinion. Sometimes known as "spin-doctoring" I guess, but I think the memetic paradigm is probably a more useful one to attack it from.

I'd love to understand that better than I do. Understanding it properly would certainly help with evangelism.

I fear that very few people really do grok it though, certainly I wouldn't be capable of writing much relevant about it yet.

Comment author: Emile 20 April 2009 12:57:43PM 1 point [-]

I'm not sure that's something worth studying here - it's kinda sneaky and unethical.

Comment author: pre 20 April 2009 01:35:33PM 6 points [-]

Oh, so we're just using techniques which win without being sneaky? Isn't 'sneaky' a good, winning strategy?

Rationality's enemies are certainly using these techniques. Should we not study them, if only with a view to finding an antidote?

Comment author: Simulacra 21 April 2009 02:25:37AM 4 points [-]

I would say it is certainly something worth studying, the understanding of how it works would be invaluable. We can decide if we want to use it to further our goals or not once we understand it (hopefully not before, using something you don't understand is generally a bad thing imho). If we decide not to use it, the knowledge would help us educate others and perhaps prevent the 'dark ones' from using it.

Perhaps something a la James Randi: create an ad whose first half uses some of the techniques and whose second half explains the mechanisms used to control inattentive viewers, with a link to somewhere with more information on how it's done and why people should care.

Comment author: Alicorn 19 April 2009 10:11:55PM 1 point [-]

I have more to say about my cool ethics course on weird forms of utilitarianism, but unlike with Two-Tier Rationalism, I'm uncertain of how germane the rest of these forms are to rationalism.

I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.

Comment author: SoullessAutomaton 19 April 2009 10:41:02PM 2 points [-]

I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.

I'm not sure what Reflection Principle you're referring to here. Google suggests two different mathematical principles but I'm not seeing how either of those would be relevant on LW, so perhaps you mean something else?

Comment author: Alicorn 20 April 2009 02:04:04AM *  2 points [-]

The Reflection Principle, which some epistemologists hold to be a constraint on rationality, says that if you learn that you will believe some proposition P in the future, you should believe P now. There is complicated math about what you should do if you have degree of credence X in the proposition that you will have credence Y in P in the future, and how that should affect your current probability for P, but that's the basic idea. An alternate formulation is that you should treat your future self as a general expert.
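A minimal sketch of that credence math, assuming a discrete distribution over future credences (the function name is mine, not standard terminology): Reflection in its simplest form says your current credence in P should be the expectation of your future credence in P.

```python
# credence_distribution is a list of (x, y) pairs: with probability x,
# your future credence in P will be y.  Reflection then sets your current
# credence in P to the expectation sum(x * y).

def reflection_credence(credence_distribution):
    # The probabilities over possible future credences must sum to 1.
    assert abs(sum(x for x, _ in credence_distribution) - 1.0) < 1e-9
    return sum(x * y for x, y in credence_distribution)

# 30% chance I'll end up 90% confident in P, 70% chance I'll end up 20% confident:
print(reflection_credence([(0.3, 0.9), (0.7, 0.2)]))  # ≈ 0.41
```

The drunk-driver objection below is exactly a case where this expectation seems like the wrong thing to compute.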

Comment author: SoullessAutomaton 20 April 2009 09:26:27AM 1 point [-]

Reminds me a bit of the LW (ab)use of Aumann's Agreement Theorem, heh--at least with a future self you've got a high likelihood of shared priors.

Anyway, I know arguments from practicality are typically missing the point in philosophical arguments, but this seems to be especially useless--even granting the principle, under what circumstance could you become aware of your future beliefs with sufficient confidence to change your current beliefs based on such?

It seems to boil down mostly to "If you're pretty sure you're going to change your mind, get it over with". Am I missing something here?

Comment author: Alicorn 20 April 2009 02:59:54PM 1 point [-]

Well, that's one of my many issues with the principle - it's practically useless, except in situations that it has to be formulated specifically to avoid. For instance, if you plan to get drunk, you might know that you'll consider yourself a safe driver while you are (in the future) drunk, but that doesn't mean you should now consider your future, drunk self a safe driver. Sophisticated statements of Reflection explicitly avoid situations like this.

Comment author: infotropism 19 April 2009 11:44:54PM *  0 points [-]

I have an idea I need to build up about simplicity: how to build your mind and beliefs up incrementally, layer by layer; how perfection is achieved not when there is nothing left to add, but when there is nothing left to remove; how simple-minded people are sometimes the ones to state simple, true ideas lost sight of by people who are too clever and sophisticated, whose knowledge is like a house of cards or a bag of knots; genius, learning, growing up, creativity correlated with age, zen. But I really need to do a lot more searching before I can put something together.

Edit: and if I post that here, it's because if someone else wants to dig into this idea and work on it with me, that would be a pleasure.

Comment author: ciphergoth 20 April 2009 07:38:05AM 0 points [-]

Do you understand Solomonoff's Universal Prior?

Comment author: infotropism 20 April 2009 10:02:07AM *  0 points [-]

Not the mathematical proof.

But the idea that if you don't yet have data bound to observation, then you decide the probability of a prior by looking at its complexity.

Complexity, defined as looking up the smallest compressed bitstring program, for each possible Turing machine (and that is the reason why it's intractable unless you have infinite computational resources, yes?), that can be said to generate this prior as the output of being run on that machine.

The longer the bitstring, the less likely the prior (and this has to do with the idea that you can make more permutations on larger bitstrings: a one-bit string can be in 2 states, a two-bit one in 2^2 = 4 states, a 3-bit one in 2^3 = 8 states, and so on).

Then you somehow average the probabilities over all pairs of (Turing machine + program) into one overall probability?

(I'd love to understand that formally)
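For what it's worth, here is a toy computable caricature of the core idea (the real universal prior sums over all programs on a universal machine and is uncomputable; the hypothesis names and program lengths below are made up for illustration):

```python
# Each hypothesis is weighted by 2^(-L), where L is the length in bits of
# the shortest program we imagine generates it.  Normalizing those weights
# gives a prior that automatically favors simpler hypotheses.

def weight(program_length_bits):
    return 2.0 ** -program_length_bits

shortest_program = {"simple hypothesis": 3, "complex hypothesis": 10}
weights = {h: weight(L) for h, L in shortest_program.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# A 3-bit program outweighs a 10-bit one by 2^7 = 128, so the simple
# hypothesis gets almost all of the prior mass.
print(prior)
```

This also shows where the "permutations" intuition enters: there are 2^L bitstrings of length L, so the total weight available to length-L programs stays bounded as L grows.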

Comment author: PhilGoetz 20 April 2009 01:57:19AM *  0 points [-]

I'm skeptical of the concept as presented here. Anything with the phrase "how perfection is achieved" sets up a strong prior in my mind saying it is completely off-base.

More generally, in evolution and ecosystems I see that simplicity is good temporarily, as long as you retain the ability to experiment with complexity. Bacteria rapidly simplify themselves to adapt to current conditions, but they also experiment a lot and rapidly acquire complexity when environmental conditions change. When conditions stabilize, they then gradually throw off the acquired complexity until they reach another temporary simple state.

Comment author: infotropism 20 April 2009 02:23:09AM *  0 points [-]

So maybe, to rephrase the idea: we want to strive to achieve something as close as we can to perfection, to optimality?

If we do, we may then start laying the groundwork, as well as collecting practical advice and general methods on how to do that. Not a step-by-step absolute guide to perfection, but rather the first draft of one idea that would be helpful in aiming towards optimality.

edit: also, that's a Saint-Exupery quote that illustrates the idea; I wouldn't take it that literally, not as more than a general guideline.

Comment author: swestrup 21 April 2009 10:22:48PM 1 point [-]

Lurkers and Involvement.

I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the content on Less Wrong.

I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes, per day, folks spend on this site.

For myself, I'd estimate no more than 10 or 15 minutes but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in 5. Even then, if there's a lot of comments, I don't bother reading most of them.

One of the reasons I don't post is that I often find it takes me 20-30 minutes to put my words into a shape that I feel is up to the rather high standard of posting quality here, and I'm generally not willing to commit that much of my time to this site.

I think the question of how much of their time an average person thinks a site is worth to them is an important metric, and one we may wish to try to measure with an eye to increasing the average for this site.

Heck, that might even get me posting more often.

Comment author: Simulacra 21 April 2009 03:04:47AM 1 point [-]

There has been some call for applications of rationality: how can this help me win? This, combined with the popularity and discussion surrounding "Stuck in the Middle With Bruce", gave me an idea for a potential series of posts relating to LWers' pastimes of choice. I have a feeling most people here have a pastime, and if rationalists should win, there should be some way to map the game to rational choices.

Perhaps articles discussing "how rational play can help you win at x" and "how x can help you think more rationally" would be worthwhile. I'm sure there are games or hobbies that multiple people share (as was discovered relating to Magic) and even if no one has played a certain game the knowledge gained from it should be generalizable and used elsewhere (as was the concept of a Bruce).

I might be able to do a piece on Counter-Strike (perhaps generalized to FPS style games) although I haven't played in several years.

I know I would be interested in more discussion of how Magic and rationality work together. In fact I almost went out and picked up a deck to try it out again (I haven't played since Ice Age, when I was but a child) but remembered I don't know anyone I could play with right now anyway, which is probably why I don't.

Comment author: beoShaffer 09 September 2012 03:28:13AM 0 points [-]

I started an article on the psychology of rationalization, but stopped due to a mixture of time constraints and not finding many high-quality studies.

Comment author: pangloss 27 April 2009 07:03:12PM 0 points [-]

The Verbal Overshadowing effect, and how to train yourself to be a good explicit reasoner.

Comment author: dclayh 21 April 2009 10:48:23PM *  0 points [-]

Contents of my Drafts folder:

  • A previous version of my Silver Chair post, with more handwringing about why one might not stop someone from committing suicide.
  • A post about my personal motto ("per rationem, an nequaquam", or "through reason, or not at all"), and how Eliezer's infamous Newcomb-Box post did and didn't change my perspective on what rationality means.
  • A listing of my core beliefs related to my own mind, beliefs/desires/etc., with a request for opinions or criticism.
  • A post on why animals in particular and any being not capable of rationality/ethics in general don't get moral consideration. This one isn't posted only because my proof has a hole.
Comment author: steven0461 21 April 2009 04:53:13PM 0 points [-]

Great thread idea.

Frequentist Pitfalls:

Bayesianism vs. frequentism is one thing, but there are a lot of frequentist-inspired misinterpretations of the language of hypothesis testing that all statistically competent people agree are wrong. For example, note that:

  • p-values are not posteriors (interpreting them this way usually overstates the evidence against the null; see also Lindley's paradox)
  • p-values are not likelihoods
  • confidence doesn't mean confidence
  • likelihood doesn't mean likelihood
  • statistical significance is a property of test results, not hypotheses
  • statistical significance is not effect size
  • statistical significance is not effect importance
  • p-values aren't error probabilities
  • the 5% threshold isn't magical

In a full post I'd flesh all of these out, but I'm considering not doing so because it's kind of basic and it turns out Wikipedia already discusses most of this surprisingly well.
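To make the first pitfall concrete, here is a toy calculation of my own (the 0.6 alternative and the 50/50 prior are arbitrary choices for illustration, and comparing against a single point alternative is a simplification of Lindley's setup):

```python
# Observe 61 heads in 100 flips.  The two-sided p-value under the fair-coin
# null comes out "significant" at 5%, yet the posterior probability of the
# null (against a single biased alternative, with a 50/50 prior) is larger
# than the p-value -- the two numbers answer different questions.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, k = 100, 61

# p-value: probability under the null of a result at least this extreme.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

# Posterior of the null vs. the alternative "bias = 0.6", prior 0.5 each.
like_null, like_alt = binom_pmf(k, n, 0.5), binom_pmf(k, n, 0.6)
posterior_null = like_null / (like_null + like_alt)

print(p_value, posterior_null)
```

Reading the p-value as "the probability the coin is fair" would overstate the evidence against the null here, which is the pitfall in question.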

Comment author: Vladimir_Nesov 21 April 2009 05:16:19PM *  0 points [-]

More generally, the semantics of posteriors, and of probability in general, comes from the semantics of the rest of the model: of the prior, state space, variables, etc. It's incorrect to attribute any kind of inherent semantics to a model, which, as you note, happens quite often when frequentist semantics suddenly "emerges" in probabilistic models. It is a kind of mind projection fallacy, where the role of the territory is played by the math of the mind.

Comment author: steven0461 21 April 2009 05:25:26PM *  1 point [-]

To return to something we discussed in the IRC meetup: there's a simple argument why commonly-known rationalists with common priors cannot offer each other deals in a zero-sum game. The strategy "offer the deal iff you have evidence of at least strength X saying the deal benefits you" is defeated by all strategies of the form "accept the deal iff you have evidence of at least strength Y > X saying the deal benefits you", so never offering and never accepting if offered should be the only equilibrium.

This is completely off-topic unless anyone thinks it would make an interesting top-level post.

ETA: oops, sorry, this of course assumes independent evidence; I think it can probably be fixed?
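As a sanity check on the argument, here is a quick Monte Carlo sketch (the signal model, true value plus or minus 1 with Gaussian noise, is my own construction, not anything from the meetup):

```python
# Zero-sum deal: the offerer's value is v, the acceptor's is -v.  Each side
# sees only a noisy private signal of its own value.  The offerer offers on
# evidence of strength > X; the acceptor accepts on strength > Y, with Y > X.
import random

def offerer_payoff(X, Y, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = rng.choice([1.0, -1.0])
        offer_signal = v + rng.gauss(0, 1)    # offerer's evidence about v
        accept_signal = -v + rng.gauss(0, 1)  # acceptor's evidence about -v
        if offer_signal > X and accept_signal > Y:
            total += v  # deal happens; offerer realizes v
    return total / trials

# Conditional on the acceptor (with the stricter threshold) saying yes, the
# deals that go through tend to be the ones that hurt the offerer:
print(offerer_payoff(X=0.5, Y=1.0))
```

The average payoff comes out negative: accepting only on stronger evidence than the offerer required picks off exactly the offers made in error, which is the adverse-selection mechanism behind the no-trade equilibrium.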

Comment author: MBlume 20 April 2009 03:13:41AM 0 points [-]

Thank you for this post -- I feel a bit lighter somehow having all those drafts out in the open.

Comment author: byrnema 20 April 2009 03:41:12AM 2 points [-]

I also think this post is a great idea. I've written 3 posts that were, objectively, not that appropriate here. Perhaps I should have waited until I knew more about what was going on at LW, but I'm one of those students that has to ask a lot of questions at first, and I'm not sure how long it would have taken me to learn the things that I wanted to know otherwise.

Along these lines, what do you guys think of encouraging new members (say, with Karma < 100) to always mini-post here first? [In Second Life, there was a 'sandbox area' where you could practice building objects.] Here on LW, it would be (and is, now that it's here) immensely useful to try out your topic and gauge what the interest would be on LW.

Personally, I would have been happy to post my posts (all negative-scoring) somewhere out of the main thoroughfare, as I was just fishing for information and trying to get a feel for the group rather than wanting to make top-level statements.

Comment author: MBlume 20 April 2009 03:46:35AM *  2 points [-]

I definitely think this is a post that should stay visible, whether because we start stickying a few posts or because somebody reposts this monthly.

I don't know whether we need guidelines about when people should post here, and definitely don't think we need a karma cutoff. I think just knowing it's here should be enough.

Comment author: MBlume 20 April 2009 02:58:50AM 0 points [-]

Let's empty out my draft folder then....

Counterfactual Mugging v. Subjective Probability

A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.

If you didn't read it the first time, please do -- I don't plan to attempt to summarize. Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.

So, those still reading, you would give Omega the $100. You would do so because if someone told you about the problem now, you could do the expected utility calculation 0.5U(-$100)+0.5U(+$10000)>0. Ah, but where did the 0.5s come from in your calculation? Well, Omega told you he flipped a fair coin. Until he did, there existed a 0.5 probability of either outcome. Thus, for you, hearing about the problem, there is a 0.5 probability of your encountering the problem as stated, and a 0.5 probability of your encountering the corresponding situation, in which Omega either hands you $10000 or doesn't, based on his prediction. This is all very fine and rational.
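For concreteness, the calculation gestured at above, under the simplifying assumption of utility linear in dollars (the draft writes U(...) in general, so the linearity is mine):

```python
# 0.5 * U(-$100) + 0.5 * U(+$10000) > 0, with U taken as the identity here.
p_heads = 0.5  # Omega's fair coin

def expected_utility(u_lose=-100.0, u_win=10_000.0):
    return p_heads * u_lose + (1 - p_heads) * u_win

print(expected_utility())  # 4950.0
```

The whole puzzle below is about where that 0.5 is allowed to come from, not about the arithmetic itself.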

So, new problem. Let's leave money out of it, and assume Omega hands you 1000 utilons in one case, and asks for them in the other -- exactly equal utility. What if there is an urn, and it contains either a red or a blue marble, and Omega looks, maybe gives you the utility if the marble is red, and asks for it if the marble is blue? What if you have devoted considerable time to determining whether the marble is red or blue, and your subjective probability has fluctuated over the course of your life? What if, unbeknownst to you, a rationalist community has been tracking evidence of the marble's color (including your own probability estimates), and running a prediction market, and Omega now shows you a plot of the prices over the past few years?

In short, what information do you use to calculate the probability you plug into the EU calculation?

Comment author: ciphergoth 20 April 2009 07:37:13AM 6 points [-]

This is probably mean of me, but I'd prefer it if the next article about Omega's various goings-on set out to explain why I should care about what the rational thing to do is in Omega-ish situations.

Comment author: Vladimir_Nesov 20 April 2009 10:04:59AM *  0 points [-]

So, those still reading, you would give Omega the $100.

It's a little too strong; I think you shouldn't give away the $100, because you are just not reflectively consistent. It's not you who could've run the expected utility calculation to determine that you should give it away. If you persist, by the time you must act it's not in your interest anymore; it's a lost cause. And that is the subject of another post that has been lying in draft form for some time.

If you are strong enough to be reflectively consistent, then ...

In short, what information do you use to calculate the probability you plug into the EU calculation?

You use your prior for probabilistic valuation, structured to capture expected subsequent evidence on possible branches. According to evidence and possible decisions on each branch, you calculate expected utility of all of the possible branches, find a global feasible maximum, and perform a component decision from it that fits the real branch. The information you have doesn't directly help in determining the global solution, it only shows which of the possible branches you are on, and thus which role should you play in the global decision, that mostly applies to the counterfactual branches. This works if the prior/utility is something inside you, worse if you have to mine information from the real branch for it in the process. Or, for more generality, you can consider yourself cooperating with your counterfactual counterparts.

The crux of the problem is that you care about counterfactuals; once you attain this, the rest is business as usual. When you are not being reflectively consistent, you let the counterfactual goodness slip away from your fingers, turning to myopically optimizing only what's real.