The title says it all.
Less Wrong Rationality Quotes since April 2009, sorted by points.
This version copies the visual style and preserves the formatting of the original comments.
I already wrote a top-level comment about the original raw-text version of this, but my access logs suggest that EDITs of older comments reach only very few people. See that comment for a bit more detail.
Less Wrong Rationality Quotes since April 2009, sorted by points.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code: 80 lines of Python. It produces raw text output; links and formatting are lost. It would be quite trivial to produce nice and spiffy HTML output instead.
EDIT2: I can do HTML output now. It is nice and spiffy, but it has a CSS bug: after the fifth quote the layout falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs reached only very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version had already checked out the txt version. We will soon find out which explanation is correct.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
1. Is it okay to cheat on your spouse as long as (s)he never knows?
2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
3. If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
5. If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
Nisan:
Feel free to be skeptical until I've tried it.
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly -- itself a dubious achievement, even if it were possible -- your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn't mean that your arguments shouldn't be stated clearly and discussed openly, but when you insultingly refer to opposing views as "chauvinism," you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher level valu...
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Potential top-level article, have it mostly written, let me know what you think:
Title: The hard problem of tree vibrations [tentative]
Follow-up to: this comment (Thanks Adelene Dawner!)
Related to: Disputing Definitions, Belief in the Implied Invisible
Summary: Even if you agree that trees normally make vibrations when they fall, you're still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the "standard" debate over a famous philosophical dilemma: "If a tree falls in a forest and no one hears it, does it make a sound?" (Call this "Question Y.") Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between "sound as vibration" and "sound as auditory ...
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don't bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
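A rough sketch of the arithmetic behind these numbers (simplifying, as the post does, by treating the jackpot as the only prize and assuming your non-random-looking numbers make you the sole winner):

```python
from math import comb

# Probability of one ticket hitting the Euromillions jackpot:
# choose 5 of 50 numbers and 2 of 9 stars.
p_jackpot = 1 / (comb(50, 5) * comb(9, 2))  # 1 in 76,275,360

def net_expected_win(jackpot_eur, ticket_price_eur=1.0):
    """Net expected win per ticket, jackpot-only, sole winner assumed."""
    return p_jackpot * jackpot_eur - ticket_price_eur

# Jackpot size needed for the quoted expectation of EUR 1.05 per EUR 1 bet:
required_jackpot = 1.05 / p_jackpot  # about EUR 80 million
```

Under these assumptions the jackpot would need to have rolled over to roughly EUR 80 million before a EUR 1 ticket is worth EUR 1.05 in expectation; shared jackpots and the minor prizes change the figure in practice.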
Question: what's your experience with things that seem new-agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about sensory deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany, in my case). I will try one and hopefully enjoy it soon. Sadly those places are run by new-age folks who offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensorily empty space.
Less Wrong Book Club and Study Group
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I'm willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our...
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all Babyeaters, including the living babies and the ones being digested, it would end up in a place that adult Babyeaters would not be happy with. If you expanded it to include all Babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the Babyeater morality is objectively unstable when aggregated.
Thoughts?
While searching for literature on "intuition", I came upon a book chapter that gives "the state of the art in moral psychology from a social-psychological perspective". This is the best summary I've seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.
ETA: Here's the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.) Handbook of Social ...
Many are calling BP evil and negligent; has there actually been any evidence of criminal activity on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work in it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we're willing.
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other ...
Regrets and Motivation
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measu...
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren't there a lot of intelligent species?
About CEV: Am I correct that Eliezer's main goal would be to find the one utility function for all humans? Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
[edit]Reading helps. This he has actually discussed, in sufficient detail, I think.[/edit]
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (Of course, there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
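One way to frame the question is as an odds update by a likelihood ratio, where "some humans believe it" is the evidence. A sketch with entirely made-up numbers:

```python
def update_odds(prior_odds, p_believed_given_true, p_believed_given_false):
    """Posterior odds after observing that some group believes the idea."""
    return prior_odds * (p_believed_given_true / p_believed_given_false)

# Hypothetical figures: true ideas attract believers with probability 0.2,
# false ones with probability 0.05, so belief is a 4:1 Bayes factor.
prior_odds = 1e-6  # odds that a random idea drawn from idea-space is true
posterior_odds = update_odds(prior_odds, 0.2, 0.05)  # four times the prior
```

The "probability bump" is then just the ratio of the two likelihoods, which is exactly what the question asks people to estimate.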
Saw this over on Bruce Schneier's blog, it seemed worth reposting here. Wharton’s “Quake” Simulation Game Shows Why Humans Do Such A Poor Job Planning For & Learning From Catastrophes (link is to summary, not original article, as original article is a bit redundant). Not so sure how appropriate the "learning from" part of the title is, as they don't seem to mention people playing the game more than once, but still quite interesting.
What solution do people prefer to Pascal's Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger".
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
What have I left out?
Because it was used somewhere, I calculated my own weight's worth in gold: it is about 3.5 million EUR. In silver you can get me for 50,000 EUR. The Mythbusters recently built a lead balloon and had it fly. Some proverbs don't hold up to reality and/or engineering.
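A back-of-envelope version of the same calculation, with assumed inputs (roughly 75 kg of body weight and approximate 2010 metal prices, gold around 45 EUR/g and silver around 0.65 EUR/g; adjust to taste):

```python
# All inputs are assumptions for illustration, not current market data.
weight_g = 75_000            # assumed body weight, in grams
gold_eur_per_g = 45          # assumed gold price
silver_eur_per_g = 0.65      # assumed silver price

worth_in_gold = weight_g * gold_eur_per_g      # roughly 3.4 million EUR
worth_in_silver = weight_g * silver_eur_per_g  # roughly 49,000 EUR
```

The roughly 70:1 gold-to-silver price ratio is what makes "worth your weight in gold" impressive and "worth your weight in silver" mundane.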
The number of heart attacks has fallen since England imposed a smoking ban
http://www.economist.com/node/16333351?story_id=16333351&fsrc=scn/tw/te/rss/pe
In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday's NY Times; it isn't very good)
Heuristics and biases in charity
http://www.sas.upenn.edu/~baron/papers/charity.pdf (I considered making this link as a top-level post.)
I've recently begun downvoting comments that are at -2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach -2 but fail to be pushed over to -3, which I'm attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don't want to be 'the one to push the button'. This is an extension of my RL policy of taking 'the last' of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, ...
Does countersignaling actually happen? Give me examples.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
My recent comment on Reddit reminded me of WrongTomorrow.com - a site that was mentioned briefly here a while ago, but which I haven't seen much since.
Try it out, guys! LongBets and PredictionBook are good, but they're their own niche; LongBets won't help you with pundits who don't use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes?
I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tend...
I found an interesting paper on Arxiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does.
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
Maybe this has been discussed before -- if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There's great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people lik...
OpenPCR: DNA amplification for anyone
http://www.thinkgene.com/openpcr-dna-amplification-for-anyone/
Some clips on the dark-side epistemology of history done by Christian apologists by Robert M Price, who describes himself as a Christian Atheist.
Not sure how worthwhile Price is to listen to in general though.
A question about Bayesian reasoning:
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm overcomplicating?
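On the Bayesian view the two are the same kind of thing, degrees of belief, and both update the same way when evidence arrives. A toy sketch (the likelihoods here are made up for illustration):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Prior degree of belief: P(mom is on the phone) = 1/6, formally no
# different from P(rolling a one on a fair die) = 1/6. Observing evidence
# (say, a busy signal, with assumed likelihoods) shifts that belief:
posterior = bayes_update(1/6, 0.95, 0.05)  # about 0.79
```

The "already settled fact vs. future event" distinction doesn't enter the math; both numbers measure your uncertainty, not a property of the world.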
Supposedly (actual study) milk reduces catechin level in bloodstream.
Other research says: "does not!"
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk ...
I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimal frequency. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it's time for a new one, but for the sake of the example let's assume that Eliezer was wrong and that the...
I'd like to see a picture of this LW cannon!
Rather than waste time doing both your cannon request and Roko's Fallacyzilla request, I just combined them into one picture of the Less Wrong Cannon attacking Fallacyzilla.
...now someone take Photoshop away from me, please.
I noticed that two seconds after I put it up and it's now corrected...er...incorrected. (Today I learned - my brain has that same annoying auto-correct function as Microsoft Word)
Are there cases where Occam's razor results in a tie, or is there a proof that it always yields a single solution?
Do we have a unique method for generating priors?
Eliezer has written about using the length of the program required to produce it, but this doesn't seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.
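The language-dependence worry can be made concrete with invented bit-lengths: a Solomonoff-style prior weights a hypothesis by 2^(-description length), and two encoding languages can rank the same hypotheses oppositely.

```python
# All description lengths below are invented, purely for illustration.
lengths_in_lang_a = {"H1": 5, "H2": 9}  # bits in hypothetical language A
lengths_in_lang_b = {"H1": 9, "H2": 5}  # bits in hypothetical language B

def prior(lengths, h):
    """Solomonoff-style prior weight: 2 ** -(description length in bits)."""
    return 2.0 ** -lengths[h]

# Language A favors H1; language B favors H2.
a_favors_h1 = prior(lengths_in_lang_a, "H1") > prior(lengths_in_lang_a, "H2")
b_favors_h2 = prior(lengths_in_lang_b, "H2") > prior(lengths_in_lang_b, "H1")
```

The standard reply is the invariance theorem: any two universal languages yield priors that differ by at most a constant factor (the length of a translator between them). But that constant can dominate for short hypotheses, which is exactly why the choice of language feels non-unique.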
How to write a "Malcolm Gladwell Bestseller" (an MGB)
http://blog.jgc.org/2010/06/how-to-write-malcolm-gladwell.html
How can I understand quantum physics? All explanations I've seen are either:
I don't think the subject is inherently difficult. For example, quantum computing and quantum cryptography can be explained to anyone with a basic clue and basic math skills. (example)
On the other hand I haven't seen any quantum physics explanation that did even as little a...
Blog about common cognitive biases - one post per bias:
For those of you who have been following my campaign against the "It's impossible to explain this, so don't expect me to!" defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan's blog.
In case he deletes the entire exchange thus far (which he's been known to do when I post), here's what's transpired (paragraphing truncated):
Me: That's not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function ("rules") for his actions. Maybe he doesn't really...
Am I alone in my desire to upload as fast as possible and run away to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let a god decide who's right...
I'm starting to think SIAI might have to jettison the "singularity" terminology (for the intelligence explosion thesis) if it's going to stand on its own. It's a cool word, and it would be a shame to lose it, but it's become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. "If you are feeling brave, you can approach a stranger in the street and speak your message!" Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn't do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
I'm not sure that your criticism completely holds water. Simply put, Friendly AI is a worry that has convinced only some Singularitarians. One might not be deeply concerned about it (possible example reasons: 1) you expect uploading to come well before general AI; 2) you think the probable technical path to AI will involve many more stages of AI of much lower intelligence, which will likely give us good data for solving the problem).
I agree that this Facebook group does look very much like something one would expect out of a missionizing religion...