The Dilbert Challenge: you are working in a company in the world of Dilbert. Your pointy-haired boss comes to you with the following demand:
"One year from today, our most important customer will deliver us a request for a high-quality reliable software system. Your job and the fate of the company depends on being able to develop and deploy that software system within two weeks of receipt of the specifications. Unfortunately we don't currently know any of the requirements. Get started now."
I submit that this preposterous demand is really a deep intellectual challenge, the basic form of which arises in many different endeavors. For example, it's reasonable to believe that at some point in the future, humanity will face an existential threat. Given that we will not know the exact nature of that threat until it's almost upon us, how can we prepare for it today?
On the Care and Feeding of Rationalist Hardware
Many words have been spent here on improving rationalist software -- training patterns of thought which will help us to achieve truth and reliably reach our goals.
Assuming we can still remember so far back, Eliezer once wrote:
But if you have a brain, with cortical and subcortical areas in the appropriate places, you might be able to learn to use it properly. If you're a fast learner, you might learn faster -- but the art of rationality isn't about that; it's about training brain machinery we all have in common.
Rationality does not require big impressive brains any more than the martial arts require big bulging muscles. Nonetheless, I think it would be rare indeed to see a master of the martial arts willfully neglecting the care of his body. Martial artists of the wisest schools strive to improve their bodies. They jog, or lift weights. They probably do not smoke, or eat unhealthily. They take care of their hardware so that the things they do will be as easy as possible.
So, what hacks exist which enable us to improve and secure the condition of our mental hardware? Some important areas that come to mind are:
Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other, because they compress information differently.
Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.
Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basis vectors during PCA, and so will reduce all future vectors into different 25D spaces.
In just this way, two people with life experiences that differ in a biased way (due to, e.g., socioeconomic status, country of birth, or culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations each constructs internally are incommensurate; they exist in different spaces, which introduce different errors. When they reason on their compressed data, they will reach different conclusions, even if they are using the same reasoning algorithms and executing them flawlessly. Furthermore, it would be very hard for them to discover this, since the compression scheme is unconscious. They would be more likely to believe that the other person is lying, nefarious, or stupid.
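A minimal sketch of the analogy in code, assuming numpy and scikit-learn; the biasing rule, sample sizes, and variable names here are illustrative, not a claim about how brains actually compress:

```python
# Two agents fit PCA on biased samples of the same 500-D world, then
# compress the same new data point. Their 25-D representations live in
# different bases and are not directly comparable.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
world = rng.normal(size=(10_000, 500))

# "Life experiences" biased in different directions.
experience_a = world[world[:, 0] > 0.5]
experience_b = world[world[:, 1] > 0.5]

# Each agent learns its own compression scheme.
pca_a = PCA(n_components=25).fit(experience_a)
pca_b = PCA(n_components=25).fit(experience_b)

# The same "text": one new 500-D data point...
x = rng.normal(size=(1, 500))

# ...is represented in two different 25-D spaces.
rep_a = pca_a.transform(x)
rep_b = pca_b.transform(x)

# Coordinate i means something different to each agent, so comparing
# the two representations componentwise is meaningless.
print(np.allclose(rep_a, rep_b))  # almost surely False
```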
There is a topic I have in mind that could potentially require writing a rather large amount, and I don't want to do that unless there is some interest, rather than suddenly dumping a massive essay on LW without any prior context. The topic is control theory (the engineering discipline, not anything else those words might suggest). Living organisms are, I say (following Bill Powers, whom I've mentioned before), built of control systems, and any study of people that does not take that into account is unlikely to progress very far. Among the things I might write about are these:
Purposes and intentions are the set-points of control systems. This is not a metaphor or an analogy.
Perceptions do not determine actions; instead, actions determine perceptions. (If that seems either unexceptionable or obscure, try substituting "stimulus" for "perception" and "response" for "action".)
Control systems do not, in general, work by predicting what action will produce the intended perception. They need not make any predictions at all, nor contain any model of their environment. They require neither utility measures, nor Bayesian or any other form of in...
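To make the flavor of these claims concrete, here is a bare control loop -- my own illustration, not anything from Powers: a proportional controller holds a perception near its set-point with no prediction, no model of the environment, and no utility function. All numbers are arbitrary.

```python
# A thermostat-like control loop. The "purpose" is the set-point; the
# controller acts only on the current error between set-point and
# perception, never predicting or modeling anything.
set_point = 20.0     # desired perceived temperature
temperature = 5.0    # actual state of the environment
gain = 0.3           # how strongly the controller responds to error

for _ in range(50):
    perception = temperature          # perceive
    error = set_point - perception    # compare
    action = gain * error             # act on the error alone
    temperature += action - 0.5      # environment: action plus a steady disturbance

# Settles a little below the set-point (the classic proportional-control
# offset), and holds there despite the constant disturbance.
print(round(temperature, 2))  # ~18.33
```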
We are Eliza: A whole lot of what we think is reasoned debate is pattern-matching on other people's sentences, without ever parsing them.
I wrote a bit about this in 1998.
But I'm not as enthused about this topic as I was then, because then I believed that parsing a sentence was reasonable. Now I believe that humans don't parse sentences even when reading carefully. The bird the cat the dog chased chased flew. Any linguist today would tell you that's a perfectly fine English sentence. It isn't. And if people can't parse grammatical structures with just two levels of recursion, I doubt that recursion, and generative grammars, are involved at all.
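As an illustration of the kind of recursion at issue (my own toy example, not from the draft), here is a two-rule generative grammar that produces the sentence above by center-embedding relative clauses -- trivial for the grammar, notoriously hard for human readers:

```python
def noun_phrase(nouns, verbs):
    """NP -> "the" N | "the" N NP V  (center-embedded relative clause)."""
    if len(nouns) == 1:
        return f"the {nouns[0]}"
    return f"the {nouns[0]} {noun_phrase(nouns[1:], verbs[1:])} {verbs[0]}"

# Two levels of embedding: the dog chased the cat, the cat chased the bird.
sentence = noun_phrase(["bird", "cat", "dog"], ["chased", "chased"]) + " flew"
print(sentence.capitalize() + ".")
# -> "The bird the cat the dog chased chased flew."
```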
I'm kind of thinking of doing a series of posts gently spelling out, step by step, the arguments for Bayesian decision theory. Part of this is for myself: a while back I read Omohundro's vulnerability argument, but felt there were missing bits that I had to personally fill in, assumptions I had to sit and think on before I could really say "yes, obviously that has to be true" -- and some things that I think I can generalize or restate a bit.
So, as much for myself, to organize and clear that up, as for others, I want to do a short series of "How not to be stupid (given unbounded computational power)", in which each post focuses on one or a small number of related rules/principles of Bayesian decision theory and epistemic probabilities, and gently derives them from the "don't be stupid" principle. (Again, based on Omohundro's vulnerability arguments and the usual Dutch book arguments for Bayesian stuff, but stretched out and filled in with the details that I personally felt were missing and needed to work out.)
And I want to do it as a series, rather than a single blob post so I can step by step focus on a small chunk of the problem and make it easier to reference related rules and so on.
Would this be of any use to anyone here, though? (Maybe a good sequence for beginners, to show one reason why Bayes and decision theory are the Right Way?) Or would it be more clutter than anything else?
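For readers unfamiliar with the Dutch book arguments mentioned, a worked toy example (mine, with made-up numbers): an agent whose probabilities for an event and its complement sum to more than 1 can be sold a pair of bets that loses money no matter what happens.

```python
p_rain, p_no_rain = 0.6, 0.6  # incoherent: they should sum to exactly 1

# The agent values a ticket paying $1 if it rains at p_rain dollars,
# and a ticket paying $1 if it doesn't rain at p_no_rain dollars, so a
# bookie sells it both tickets at those prices.
cost = p_rain + p_no_rain     # the agent pays $1.20

for it_rains in (True, False):
    payoff = 1.0              # exactly one ticket pays off either way
    print(f"rains={it_rains}: net = {payoff - cost:+.2f}")  # -0.20 both times
```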
This doesn't even have an ending, but since I'm just emptying out the drafts folder...
Memetic Parasitism
I heard a rather infuriating commercial on the radio today. There's no need for me to recount it directly -- we've all heard the type. The narrator spoke of the joy a woman feels in her husband's proposal, of how long she'll remember its particulars, and then, for no apparent reason, transitioned from this to a discussion of shiny rocks, and where we might think of purchasing them.
I hardly think I need to belabor the point, but there is no natural connection between shiny rocks and promises of monogamy. There was not even any particularly strong empirical connection between the two until about a hundred years ago, when some men who made their fortunes selling shiny rocks decided to program us to believe there was.
What we see here is what I shall call memetic parasitism. We carry certain ideas, certain concepts, certain memes to which we attach high emotional valence. In this case, that meme is romantic love, expressed through monogamy. An external agent contrives to derive some benefit by attaching itself to that meme.
Now, it is important to note when describing a Dark pattern t...
Buddhism.
What it gets wrong. Supernatural stuff - rebirth, karma in the magic sense, prayer. Thinking Buddha's cosmology was ever meant as anything more than an illustrative fable. Renunciation. Equating positive and negative emotions with grasping. Equating the mind with the chatty mind.
What it gets right. Meditation. Karma as consequences. There is no self, consciousness is a brain subsystem, emphasis on the "sub" (Cf. Drescher's "Cartesian Camcorder" and psychology's "system two"). The chatty mind is full of crap and a huge waste of time, unless used correctly. Correct usage includes noticing mostly-subconscious thought loops (Cf. cognitive behavioral therapy). A lot of everyday unreason does stem from grasping, which roughly equates to "magical thinking" or the idea that non-acknowledgment of reality can change it. This includes various vices and dark emotions, including the ones that screw up attempted rationality.
What rationalists should do. Meditate. Notice themselves thinking. Recognize grasping as a mechanism. Look for useful stuff in Buddhism.
Why I can't post. Not enough of an expert. Not able to meditate myself yet.
It actually strikes me that a series of posts on "What can we usefully learn from X tradition" would be interesting. Most persistent cultural institutions have at least some kind of social or psychological benefit, and while we've considered some (cf. the martial arts metaphors, earlier posts on community building, &c.) there are probably others that could be mined for ideas as well.
Aumann agreements are pure fiction; they have no real-world applications. The main problem isn't that no one is a pure Bayesian. There are 3 bigger problems:
This would probably have to wait until May.
I think there's a post somewhere in the following observation, but I'm at a loss as to what lesson to take away from it, or how to present it:
Wherever I work I rapidly gain a reputation for being both a joker and highly intelligent. It seems that I typically act in such a way that when I say something stupid, my co-workers classify it as a joke, and when I say something deep, they classify it as a sign of my intelligence. As best I can figure, it's because at one company I was strongly encouraged to think 'outside the box', and one good technique I found for...
Willpower building as a fundamental art, and some of the less obvious pitfalls, including the dangers of akrasia-circumvention techniques which simply shunt willpower from one place to another, and of overstraining, which can damage your willpower reserves.
I need to hunt back down some of the cognitive science research on this before I feel comfortable posting it.
...the dangers of akrasia-circumvention techniques which simply shunt willpower from one place to another, and of overstraining, which can damage your willpower reserves.
Easy answer: don't use willpower. Ever.
I quit it cold turkey in late 2007, and can count on one hand the number of times I've been tempted to use it since.
(Edit to add: I quit it in order to force myself to learn to understand the things that blocked me, and to learn more effective ways to accomplish things than by pushing through resistance. It worked.)
Some bad ideas on the theme "living to win":
What would a distinctively rationalist style of government look like? Cf. Dune's Bene Gesserit government by jury: what if a quorum of rationalists reaching Aumann agreement could make a binding decision?
What mechanisms could be put in place to stop politics being a mind-killer?
Why not posted: undeveloped idea, and I don't know the math.
I'm vaguely considering doing a post about skeptics. It seems to me they might embody a species of pseudo-rationality, like Objectivists and Spock. (Though it occurs to me that if we define "S-rationality" as "being free from the belief distortions caused by emotion", then "S-rationality" is both worthwhile and something that Spock genuinely possesses.) If their supposed critical thinking skills allow them to disbelieve in some bad ideas like ghosts, Gods, homeopathy, UFOs, and Bigfoot, but also in some good ideas like cryonic...
Yet another post from me about theism?
This time, pushing for a more clearly articulated position. Yes, I realize that I am not endearing myself by continuing this line of debate. However, I have good reasons for pursuing it.
I really like LW and the idea of a place where objective, unbiased truth is The Way. Since I idealistically believe in Aumann’s Agreement theorem, I think that we are only a small number of debates away from agreement.
To the extent to which LW aligns itself with a particular point of view, it must be able to defend that view. I don't w...
(Um, this started as a reply to your comment but quickly became its own "idea I'm not ready to post" on deconversions and how we could accomplish them quickly.)
Upvoted. It took me months of reading to finally decide I was wrong. If we could put that "aha" moment in one document... well, we could do a lot of good.
Deconversions are tricky though. Did anyone here ever read Kissing Hank's Ass? It's a scathing moral indictment of mainline Christianity. I read it when I was 15 and couldn't sleep for most of a night.
And the next day, I pretty much decided to ignore it. I deconverted seven years later.
I believe the truth matters, and I believe you do a person a favor by deconverting them. But if you've been in for a while, if you've grown dependent on, for example, believing in an eternal life... there's a lot of pain in deconversion, and your mind's going to work hard to avoid it. We need to be prepared for that.
If I were to distill the reason I became an atheist into a few words, it would look something like:
Ontologically fundamental mental things don't make sense, but the human mind is wired to expect them. Fish swim in a sea of water, humans swim in a sea of minds...
A criticism of practices on LW that are attractive now but will hinder "the way" to truth in the future; practices that lead to a religious idolatry of ideas (a common fate of many "in-groups") rather than objective detachment. For example,
(1) linking to ideas in original posts without summarizing the main ideas in your own words or explaining how they apply to the specific context -- as this creates short-cuts in the brain of the reader, if not in the writer;
(2) use of analogies without formally defining the ideas behind them, which leads to content not o...
Winning Interpersonally
cousin_it would like to know how rationality has actually helped us win. However, in his article, he completely gives up on rationality in one major area, admitting that "interpersonal relationships are out."
Alex strenuously disagrees, asking "why are interpersonal relationships out? I think rationality can help a great deal here."
(And, of course, I suppose everyone knows my little sob-story by now.)
I'd like to get a read from the community on this question.
Is rationality useless -- or worse, a liability when deal...
I'd be interested in reading (but not writing) a post about rationalist relationships, specifically the interplay of manipulation, honesty and respect.
Seems more like a group chat than a post, but let's see what you all think.
(rationalism:winning)::(science:results)
We've argued over whether rationalism should be defined as that which wins. I think this is isomorphic to the question of whether science should be defined as that which gets good results.
I'd like to look at the history of science in the 16th-18th centuries, to see whether such a definition would have been a help or a hindrance. My priors say that it would have been a hindrance, because it wouldn't have kicked contenders out of the field rapidly.
Under position 1, "science = good results", you would have compe...
The ideal title for my future post would be this:
How I Confronted Akrasia and Won.
It would be an account of my dealing with akrasia, which has so far resulted in eliminating two decade-long addictions and finally being able to act according to my current best judgment. I also hope to describe a practical result of using these techniques (I specified a target in advance and I'm currently working towards it.)
Not posted because:
The techniques are not yet tested even on myself. They worked perfectly for about a couple of months, but I wasn't under any severe str...
Regarding all the articles we've had about the effectiveness of reason:
Learning about different systems of ethics may be useless. It takes a lot of time to learn all the forms of utilitarianism and their problems, and all the different ethical theories. And all that people do is look until they find one that lets them do what they wanted to do all along.
IF you're designing an AI, then it would be a good thing to do. Or if you've already achieved professional and financial success, and got your personal life in order (whether that's having a wife, having...
The Implications of Saunt Lora's Assertion for Rationalists.
For those who are unfamiliar, Saunt Lora's Assertion comes from the novel Anathem, and expresses the view that there are no genuinely new ideas; every idea has already been thought of.
A lot of purportedly new ideas can be seen as, at best, a slightly new spin on an old idea. The parallels between Leibniz's views on the nature of possibility and Arnauld's objection, and David Lewis's views on the nature of possibility and Kripke's objection, are but one striking example. If there is anything to...
Scott Peck, author of "The Road Less Travelled", which was extremely popular ten years ago, theorised that people become more mature in stages, and can get stuck at a lower level of maturity. From memory, the stages were:
Christians could be either rule-following -- a stage of maturity most people could leave behind in their teens, needing a big friendly policeman in the sky to tell them what to do -- or Mystical.
Mystical people had a better understanding of the world because they did not expect it ...
I have this idea in my mind that my value function differs significantly from Eliezer's. In particular, I cannot agree to blowing up Huygens in the Baby-Eater scenario presented.
To summarize briefly: he gives a scenario which includes the following problem:
Some species A in the universe has as a core value the creation of unspeakable pain in their newborn. Some species B has as core value removal of all pain from the universe. And there is humanity.
In particular there are (besides others) two possible actions: (1): Enable B to kill off all of A, with...
Lurkers and Involvement.
I've been thinking that one might want to make a post, or post a survey, that attempts to determine how much folks engage with the content on Less Wrong.
I'm going to assume that there are far more lurkers than commenters, and far more commenters than posters, but I'm curious as to how many minutes per day folks spend on this site.
For myself, I'd estimate no more than 10 or 15 minutes, but it might be much less than that. I generally only read the posts from the RSS feed, and only bother to check the comments on one in five. Even then...
Putting together a rationalist toolset, including all the methods one needs to know, but also, and very much so, the real-world knowledge that helps one get along or ahead in life. This doesn't have to be reinvented, just pointed out and evaluated.
In short: I expect members of the rationality movement to dress well when it's needed. To be in reasonable shape. To /not/ smoke. To know about positive psychology. To know how to deal with people. And to find ways to be a rational & happy person.
There has been some calling for applications of rationality: how can this help me win? This, combined with the popularity of and discussion surrounding "Stuck in the middle with Bruce", gave me an idea for a potential series of posts relating to LWers' pastimes of choice. I have a feeling most people here have a pastime, and if rationalists should win, there should be some way to map the game to rational choices.
Perhaps articles discussing "how rational play can help you win at x" and "how x can help you think more rationally" would ...
memetic engineering
The art of manipulating the media, especially news, and public opinion. Sometimes known as "spin-doctoring" I guess, but I think the memetic paradigm is probably a more useful one to attack it from.
I'd love to understand that better than I do. Understanding it properly would certainly help with evangelism.
I fear that very few people really do grok it, though; certainly I wouldn't be capable of writing much of relevance about it yet.
I have more to say about my cool ethics course on weird forms of utilitarianism, but unlike with Two-Tier Rationalism, I'm uncertain of how germane the rest of these forms are to rationalism.
I have a lot to say about the Reflection Principle but I'm still in the process of hammering out my ideas regarding why it is terrible and no one should endorse it.
I started an article on the psychology of rationalization, but stopped due to a mixture of time constraints and not finding many high-quality studies.
It seems to me possible to create a safe oracle AI. Suppose that you have a sequence predictor which is a good approximation of Solomonoff induction but which runs in reasonable time. This sequence predictor can potentially be really useful (for example, predict future SIAI publications from past SIAI publications, then proceed to read the article which gives a complete account of Friendliness theory...) and is not dangerous in itself. The question, of course, is how to obtain such a thing.
The trick relies on the concept of a program predictor. A program predictor...
Contents of my Drafts folder:
Great thread idea.
Frequentist Pitfalls:
Bayesianism vs Frequentism is one thing, but there are a lot of frequentist-inspired misinterpretations of the language of hypothesis testing that all statistically competent people agree are wrong. For example, note that: p-values are not posteriors (interpreting them this way usually overstates the evidence against the null, see also Lindley's paradox), p-values are not likelihoods, confidence doesn't mean confidence, likelihood doesn't mean likelihood, statistical significance is a property of test results not hypo...
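A small simulation of the first pitfall -- my own illustration, with a made-up effect size and a 50/50 prior on the null: far more than 5% of "just significant" (p ≈ 0.05) results come from a true null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 100, 100_000
null_flags = []

for _ in range(trials):
    null = rng.random() < 0.5         # H0 and H1 each have prior 0.5
    mu = 0.0 if null else 0.2         # under H1, a modest real effect
    z = rng.normal(mu, 1.0, n).mean() * np.sqrt(n)  # z-statistic, known sd = 1
    p = 2 * stats.norm.sf(abs(z))     # two-sided p-value
    if 0.04 < p < 0.05:               # keep only "just significant" results
        null_flags.append(null)

# Roughly a fifth of these results come from a true null -- a p-value
# near 0.05 is nowhere near a 5% posterior probability of H0.
print(np.mean(null_flags))
```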
What an argument for atheism would look like.
Yesterday, I posted a list of reasons why I think it would be a good idea to articulate a position on atheism.
I think many people are interested, in theory, in developing some sort of deconversion programme (for example, see this one), and perhaps in creating a library of arguments and counter-arguments for debates with theists.
While I have no negative opinion of these projects, my ambition is much more modest. In a cogent argument for atheism, there would be no need to debate particular arguments. It would be ...
Thank you for this post -- I feel a bit lighter somehow having all those drafts out in the open.
Let's empty out my draft folder then....
Counterfactual Mugging v. Subjective Probability
A couple weeks ago, Vladimir Nesov stirred up the biggest hornet's nest I've ever seen on LW by introducing us to the Counterfactual Mugging scenario.
If you didn't read it the first time, please do -- I don't plan to attempt to summarize. Further, if you don't think you would give Omega the $100 in that situation, I'm afraid this article will mean next to nothing to you.
So, those still reading, you would give Omega the $100. You would do so because if someone told you...
I have an idea I need to build up, about simplicity: how to build your mind and beliefs up incrementally, layer by layer; how perfection is achieved not when there is nothing left to add, but when there is nothing left to remove; how simple-minded people are sometimes the ones to declare simple, true ideas that others have lost sight of -- others who are too clever and sophisticated, whose knowledge is like a house of cards, or a bag of knots; genius, learning, growing up, creativity correlated with age, zen. But I really need to do a lot more searching before I can put something together.
Edit: and if I post that here, it's because if someone else wants to dig into that idea and work on it with me, that would be a pleasure.
"Telling more than we can know" Nisbett & Wilson
I saw this on Overcoming Bias a while ago, thanks to Pete Carlton: www.lps.uci.edu/~johnsonk/philpsych/readings/nisbett.pdf
I hope you all read this. What are the implications? Can you tell me a story ;)
Here is my story and I am sticking to it!
I have a sailing friend who makes up physics from whole cloth; it is frightening and amusing to watch, and almost impossible to correct without some real drama. It seems one only has to be close in horseshoes, hand grenades, and life.
I've often had half-finished LW post ideas and crossed them off for a number of reasons: mostly they were too rough or undeveloped and I didn't feel expert enough. Other people might worry their post would be judged harshly, feel overwhelmed, worry about topicality, or just want some community input before adding it.
So: this is a special sort of open thread. Please post your unfinished ideas and sketches for LW posts here as comments, if you would like constructive critique, assistance and checking from people with more expertise, etc. Just pile them in without worrying too much. Ideas can be as short as a single sentence or as long as a finished post. Both subject and presentation are on topic in replies. Bad ideas should be mined for whatever good can be found in them. Good ideas should be poked with challenges to make them stronger. No being nasty!