Rationality Quotes July 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson.
- No more than 5 quotes per person per monthly thread, please.
Comments (425)
Neal Stephenson, Cryptonomicon
The more immediate question is, however: Does his positive gut reaction enable him to engage more openly with the situation, thus deriving greater value from it than he might have done otherwise?
--Keith Humphreys
(I hope that the general point is appreciated instead of starting a politics discussion! I think these kinds of proxy arguments are a very common failure mode in all areas of life.)
I don't think the conclusion follows.
It's entirely consistent to believe that the level of something is too high and has been too high for a long time, yet to not oppose it in principle.
The correct question to detect if that's really their objection is not "have they ever thought that the level is too low"--the correct question is "would they ever under any circumstances think that the level is too low". Of course, you're not going to get as many "no" answers with that as with your original formulation.
Peter Cook
Not, perhaps, a rationality quote per se, but a delightful subversion of a harmful commonplace.
H.L. Mencken
Used to truth? or used to fiction?
Truth.
Correct.
One of the stronger examples of Bayesian updating in fiction, from Buffy the Vampire Slayer season 2, episode 13
Hmm... this isn't exactly a Bayesian update, though.
Bayesian update: you have prior probabilities for theories A, B, C, D; you get new evidence for D, and you use Bayes' rule to decide how to move posterior probability to D.
Oz: you have prior probabilities for theories A, B, and C; you hear a new theory D that you hadn't previously considered, and you recalculate the influence of previous evidence to see how much credence you should give D.
This quote isn't a pure example of the distinction between "getting new evidence" and "considering a new theory", since obviously "my friends believe in D" is also new evidence, but there seems to be more of the latter than the former going on.
It's weird that we don't seem to have a term describing what kind of update the "considering a new theory" process is. It's not something that would ever be done by an ideal Bayesian agent with infinite computing resources, but it's unavoidable for us finite types.
This seems slightly off both in terms of what (the writer intends us to infer) is going on in Oz's head, and what ought to be going on. First, it seems that Oz may have considered vampires or other supernatural explanations, but dismissed them using the absurdity heuristic, or perhaps what we can call the "Masquerade heuristic" - that's where people who live in a fictional world full of actual vampires and demons and whatnot nevertheless heurise as though they lived in ours. (Aside: is 'heurise' a reasonable verbing of "use heuristics"?) Upon hearing that his friends take the theory seriously (plus perhaps whatever context caused them to make these remarks) he reconsiders without the absurdity penalty.
Second, what should be going on is that Oz has theories A, B, C with probabilities adding up to 1-epsilon, where epsilon is the summed probability of "All those explanations which I haven't had time to explicitly consider as theories". Just because he's never explicitly formulated D and formally assigned a probability to it, doesn't mean it doesn't have an implicit one. Once it is picked out of hypothesis space, he can detach it from the other previously unconsidered theories, formally assign an initial probability much smaller than epsilon, and update from there. Of course this is not realistic as a matter of human psychology, but what I'm arguing is that "I never thought of theory X before" does not actually demonstrate that "Oh yeah, theory X makes a lot of sense" is not a Bayesian update. It just means that the updater hasn't had the processing power to fully enumerate the space of available theories.
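The epsilon bookkeeping described above can be sketched in a few lines of Python (all the probabilities here are invented purely for illustration):

```python
# Explicitly considered theories hold probability 1 - epsilon; epsilon is
# reserved for "everything I haven't thought of yet". (Invented numbers.)
epsilon = 0.05
priors = {"A": 0.60, "B": 0.25, "C": 0.10}  # sums to 1 - epsilon

# "Considering a new theory" D: carve its prior out of the reserved mass.
prior_D = 0.01                 # must be much smaller than epsilon
priors["D"] = prior_D
epsilon -= prior_D             # mass left for still-unconsidered theories

# An ordinary Bayesian update on evidence E ("my friends believe D"),
# ignoring the residual epsilon for simplicity:
likelihoods = {"A": 0.1, "B": 0.1, "C": 0.1, "D": 0.9}  # P(E | theory)
unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
total = sum(unnormalized.values())
posteriors = {t: p / total for t, p in unnormalized.items()}
```

Once D is detached from the reserved mass, "Oh yeah, theory D makes a lot of sense" is just the usual update; the only unusual step was assigning D its initial sliver of probability.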
Does Oz already know that he's a werewolf at this point? That would seem to bring "vampires exist" into the realm of plausible hypotheses.
-- Jerry Avorn, quoted here.
-- David Ricardo
Tyrion Lannister in George R.R. Martin's A Clash of Kings
Most importantly, you are telling the world that anyone saying the same thing is at risk of losing their tongue, regardless of the correctness of the information.
That makes it cheaper for people to argue against the information than to argue for it.
And that increases the chance that people will finally consider him a liar.
Not necessarily. It makes it cheaper for people to argue against whatever slim fraction of the information they can put up as a strawman without risking their own tongues. But it's hard to put up a real argument against an opposition that you can't really even quote.
Not if that strawman is easily blown away by whatever samizdat eventually conveys the full information.
Yvain explains some of the mechanisms better than I could in points 5 through 7 here:
http://squid314.livejournal.com/333353.html
The effectiveness of silencing someone really depends on how common such silencing is for a given regime. For example, if a regime silences all critics (regardless of whether they tell the truth or lie) an individual act of censorship doesn't carry any information about whether the censored info was true or false.
On the other hand, tons of claims are made against the US government every day, and no action is taken against almost all of them. If the government suddenly acted to silence one conspiracy theorist, far more attention would be paid to his claims, and the action would likely backfire.
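The contrast between indiscriminate and selective censorship can be phrased as a likelihood ratio; here is a toy sketch with entirely invented numbers:

```python
# How much evidence does "claim X was censored" give that X is true?
# Compare P(censored | X true) to P(censored | X false). Invented numbers.
def likelihood_ratio(p_censor_if_true, p_censor_if_false):
    return p_censor_if_true / p_censor_if_false

# A regime that silences all critics, true or false:
indiscriminate = likelihood_ratio(0.99, 0.99)  # ratio 1: zero evidence

# A government that almost never censors, but silenced this one claim
# (assuming, in this toy model, true claims are likelier to be censored):
selective = likelihood_ratio(0.30, 0.01)       # large ratio: strong evidence
```

When the ratio is 1, the act of censorship moves no probability at all; the larger the ratio, the more the silencing itself advertises the claim.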
Michel de Montaigne, Essays, "On habit"
Does it actually help? My usual reactions are "Ha, yeah, I totally do that. Silly human foibles eh?", "Screw you, anonymous proverb author, just because you don't mention what makes this a least-bad option doesn't make it worse", or "Yeah, that's the problem. Do you have a solution?".
Yes. One option is to use it as a memorable trigger- "Oh, I'm making mistake X, like the proverb"- and then amend behavior. (This is one of the reasons why it's worth trying to word proverbs as memorably as possible- rhyming helps quite a bit. If your actions you want to jigger, then do not fail to set a trigger! Sometimes it works better than others.)
A superior option is, upon seeing the maxim, to contemplate it fully, and plan out now how it could be avoided in some way, and then practice that offline.
In general, though, de Montaigne is highlighting the general thrust of Less Wrong. Knowing the ways in which people in general make mistakes is most useful to you if you use that knowledge to prevent yourself from making that mistake, and a general mistake people make is to not do that!
The Doctor - Doctor Who
Nick Bostrom, Superintelligence: the Coming Machine Intelligence Revolution, chap. 2
Looking forward to reading that. This idea is definitely older than this chapter, though; would be interested to know who first made this observation and when.
EDIT:
-- Reducing Long-Term Catastrophic Risks from Artificial Intelligence (in the PDF, not the summary)
Assuming we're the stupidest possible biological species capable of starting a technological civilization seems almost (though not quite) as wrong as asserting we're the smartest such. In both cases we're generalizing from a sample size of one.
For instance, I can imagine a technological civilization that was stupid enough to wipe itself out in a nuclear war, which we've so far managed to avoid; or to destroy its environment far worse than we have. I can also imagine a society that might be able to reach 18th or 19th century levels of tech but couldn't handle calculus or differential geometry.
Well, considering it took us thousands to hundreds of thousands of years (depending on whether you buy that certain, more chronologically recent adaptations played a significant role) to start developing the rudiments of technological civilization, after evolving all the biological assets of intelligence that we have now, I think it's pretty fair to infer that we're not that far above the minimum bar.
A species whose intelligence was far in excess of that necessary to be capable of technological civilization could probably have produced individuals capable of kickstarting the process in every generation once they found themselves in an environment capable of supporting it. By that measure, we as a species proved quite resoundingly lacking.
I agree they're both very wrong, but I don't think the levels of wrongness are as close as you suggest. The former sounds much, much wronger to me. We're much more likely to be close to the dumb end than close to the smart end.
-- Robert Jay Lifton, Thought Reform and the Psychology of Totalism
Bernard de Fontenelle,1686
Found in book review
“Erudition can produce foliage without bearing fruit.” - Georg Christoph Lichtenberg
English proverb
It should also be noted that if one doesn't start wishing for a horse, the probability of obtaining one decreases further.
I know this is meant to be a call to action instead of contemplation, but sometimes I've heard it quoted as meaning "Be an adult, stop wishing for very-difficult-to-obtain things", and that is a statement I don't agree with.
Gurney Halleck in Dune by Frank Herbert
Nick Beckstead, On the Overwhelming Importance of Shaping the Far Future, Rutgers University, New Brunswick, 2013, p. 19
"Here are the ten major principles of rapid skill acquisition:
The First 20 Hours: How to Learn Anything . . . Fast! by Josh Kaufman.
-- Donna Ball (writing as Donna Boyd), The Passion
That sounds like fun, from a LaVeyan-ish perspective. Fighting and killing are more exciting than singing Kumbaya. Does she just not like raw meat?
Because the consequences of losing are so terrible, people tend to avoid serious fighting if they can. Being hunted - a far more likely state - is decidedly un-fun.
Being hunted is just as likely as hunting. It's just that being hunted is much worse than hunting is good.
Also, being in the state of trying to avoid being hunted is un-fun.
It's actually from the prologue of a romance novel, and the narrator is a werewolf.
This is factually false. I know the subculture of Americans who are most passionate about going back to nature, and they do it. The unrealism in their attitude derives not from ignorance of nature, but from being able to go back to nature while under the protection of American law and mores, so that they don't have to band together in tribes for protection, compete with other tribes for land, and do the whole tribal bickering and conformity thing.
It's all about population density. Primitive life is pretty great if you have low population density--one person per square mile is about right in much of North America. But the population always grows until you have conflict.
Spending 9 hours a day 5 days a week sitting in a cubicle staring at a monitor and typing in numbers is horrible in its own ways, which the author prefers to accommodate and ignore.
(There are no poisonous thorns in North America. And when you see two snakes in "writhing, heaving masses", they're probably mating.)
What exactly was claimed to be a fact and how do you know it's false?
Um. Really? What do you call primitive life, then? Does it include contemporary medicine, for example?
"This testifies to nothing other than the fact that those who recommend the satisfactions of living in harmony with nature have never had to do it." That "fact" is false, and sets up a straw man in the place of the views and preferences of people who know what they're talking about.
In what sense is traveling with modern equipment, vaccinated and raised in an industrial society, all of which depends crucially on a vast technological economy and society, 'living in harmony with nature'?
They aren't living in harmony with nature because their brief highly sanitized encounters are structured and make use of countless highly unnatural products & tools, and so that is not a strawman.
Me, I'll take air conditioning, indoor plumbing, mosquito control, and antibiotics any day...
I 100% agree with this. As a kid, I used to daydream about going and living by myself in the wilderness, partly because sitting in a classroom all day was so awful. (The other aspect is that I didn't like people much when I was 10). I've compromised by finding a job where I don't have to sit down and type numbers into a computer...at least, not much. Also I like people a lot more now.
I have a sneaking suspicion that's not what the OP meant by "Nature."
-- F. A. Hayek
And perhaps not after that, either.
--Robert Bigelow
Reminds me of Konrad Lorenz' observation that the strength of love in mammalian species is proportional to their ability to inflict harm on each other.
— Charles Sanders Peirce
William Shakespeare, Troilus and Cressida, Act 1, Scene 2. I found the quote in The Happiness Hypothesis where this book's author wrote "Pleasure comes more from making progress toward goals than from achieving them."
Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things.
This design philosophy also seems to explain why the United States seems to have generated some of the most useful innovations in the last century.
I'll be more enthusiastic about encouraging thinking outside the box when there's evidence of any thinking going on inside it.
-- NotEnoughBears
If things said on Less Wrong, Overcoming Bias, or in HPMoR aren't in scope, it seems a little odd that things said in HPMoR discussion on a forum run by you (one that just doesn't happen to be either of those two) are.
The idea of the rule is to not have this thread be an echo chamber for Less Wrong and Yudkowsky quotes. As a sister site, Overcoming Bias falls under the same logic (though, given that Less Wrong's origin in Overcoming Bias grows constantly more distant in time, I wouldn't mind that rule getting relaxed for Overcoming Bias's more recent entries).
But either way, I haven't seen that many lesswrong members participate in "hpmor/reddit" or that many hpmor/reddit members participate in lesswrong, so I think it makes sense to NOT ban hpmor/reddit quotes from this thread...
We succeeded in getting rid of the Overcoming Bias ban for several months a couple of years ago. Unfortunately someone reverted to an old version and since then it's stuck. Traditions are a nuisance to change.
If I make this post next month, I'll get rid of the ban. Should that also mean Robin Hanson is fair game?
[Edit] I realized that waiting was silly since I made this month's. It's not clear to me whether or not Hanson quotes should be fair game, though; with the current policy, quoting gems from the comments (like NotEnoughBears's quote) works but we shouldn't get deluged by Hanson quotes.
I don't think Eliezer runs r/HPMOR/ ...
It seems like he does. While I've only gone to the site once, the time I did (a few days ago) I saw drama about Eliezer censoring something on the subreddit, and people observing that this is why fan forums are better when not run by the author himself.
He's a moderator there, but he's not the top moderator, i.e. he acts at the whim of two moderators with more seniority who could remove him at any time.
I doubt this.
OK.
What evidence would cause you to change your mind?
Other authors being booted from forums discussing the stories that they wrote (whether primary or fanfic).
Don't need to be a moderator to participate in a forum.
For an example, see user Dorikka.
-- Nietzsche
--Howard Taylor
Our PLANET is mind-numbingly big. If you don't believe me, go to the Grand Canyon and look down. Did I say go to the Grand Canyon? Make that HIKE to the Grand Canyon from Yellowstone National Park. Still not convinced? ROW across the ocean to China. Bonus points if you can hit Japan without a GPS.
So in a twisted sort of sense, the Milky Way galaxy is less mind-bogglingly big, because our [or at least my] built-in distance-comprehension hardware shorts out so quickly when attempting to deal with the Milky Way that we don't really even notice it, and so we switch to rigorous numbers, which do not have this short-circuiting problem.
I think that shorting out effect is what is meant by "mind-bogglingly".
People have walked from Yellowstone to the Grand Canyon. I couldn't do it myself, but I can read their accounts and understand them.
Earth is big, but our minds are amazed, not boggled. It's with the galaxy that we just start thinking "system error".
An easy way to bridge such distances is to construct a lot of intermediate steps. Take the Milky Way, containing 100 to 400 billion stars (let's take 250 billion). The problem of grasping 250 billion stars going off from just our sun is not too dissimilar from imagining someone with 250 billion dollars, going off from just 1. Lots of intermediate steps: So and so many dollars for a current generation smart phone, so and so many smart phones for, say, a villa, so and so many villas to buy, say, Microsoft. Of course different examples work differently well, but you get the picture, I suppose.
Incidentally, the number of US citizens is greater than the number of stars in the Milky Way counted in thousands, so if you find yourself a good way of visualizing the former, you can transfer that understanding to the latter, then just unpack the "thousand".
Nothing interesting, not even the size of our Hubble volume, is more than a couple dozen orders of magnitude away, which makes it -- in my opinion -- quite accessible even to our widdle bwains.
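The citizens-versus-stars comparison a couple of comments up is easy to sanity-check. The figures below are rough assumptions: roughly 316 million US citizens in 2013, and a mid-range estimate of 250 billion stars.

```python
# Rough figures (assumptions): 2013 US population, mid-range star count.
us_citizens = 316_000_000
milky_way_stars = 250_000_000_000

# Counting stars in thousands yields a number smaller than the US population,
# so "one US citizen" maps to "a thousand stars" with room to spare.
stars_in_thousands = milky_way_stars / 1000
claim_holds = stars_in_thousands < us_citizens
```

At the high end of the 100-400 billion star range the comparison gets tight, so treat this as an order-of-magnitude visualization aid rather than an exact fact.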
So, there are more than 100 billion US citizens?
Thanks for noting, corrected.
A couple dozen orders of magnitude of nearly anything will tend to stretch beyond human borders of intuitive comprehension in either direction.
It seems comprehensibly big. It would take between three and four years to walk around the Earth, walking for a sustainable number of hours at a reasonable pace every day, if you could walk around it in a straight line.
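A back-of-the-envelope check of that walking estimate (the pace and daily hours below are my own assumptions):

```python
# Assumed figures: Earth's equatorial circumference, a sustainable routine.
circumference_km = 40_075
pace_kmh = 5        # comfortable walking pace
hours_per_day = 6   # a sustainable number of hours, rest days ignored

days = circumference_km / (pace_kmh * hours_per_day)  # ~1336 days
years = days / 365                                    # ~3.7 years
```

Which indeed lands in the stated three-to-four-year range.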
-Dennett's Law of Needy Readers, Daniel Dennett
This law according to Dennett is an extension of Schank's Law:
-Roger Schank
From a Bayesian point of view, this is as it must be. People have priors and will assess anything new as a diff (of log-odds) from those priors. Even understanding what you are saying, before considering whether to update towards it, is subject to this. You will always be understood as saying whatever interpretation of your words is the least surprising to your audience.
BTW, this is standard in natural language processing (which is what a lot of Schank's AI work was in). When a sentence is ambiguous, choose the least surprising interpretation, the one containing the least information relative to your current knowledge.
The narrower your audience's priors, the more of a struggle it will be for them to hear you; the narrower your priors, the more you will struggle to hear them.
Having shown how Schank's Law is but an instance of Bayesian inference, I trust you will all find it acceptably unsurprising. :)
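The "least surprising interpretation" rule is essentially a one-liner over candidate readings; here is a toy sketch with invented probabilities:

```python
import math

# An ambiguous sentence has several candidate interpretations, each with a
# prior probability under the listener's current knowledge. (Invented numbers.)
interpretations = {
    "mundane reading": 0.70,
    "mildly novel reading": 0.25,
    "radical reading": 0.05,
}

def surprisal_bits(p):
    """Information content -log2(p): low for expected readings."""
    return -math.log2(p)

# The listener hears whichever reading costs the fewest bits:
chosen = min(interpretations, key=lambda i: surprisal_bits(interpretations[i]))
```

Minimizing surprisal is the same as maximizing prior probability, which is Schank's Law restated: the audience hears the interpretation containing the least information relative to what they already believe.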
This does raise the question of how anyone learns anything in the first place. :)
Naturally we go through a period of believing everything we're told when we're kids, and transition to comparing everything we hear to what we've already heard before as we grow up.
(This is an inexact approximation, but in my more cynical moments it strikes me as only very slightly inexact.)
Don't underestimate the power of variations.
When shaping behavior in animals, we start with something the animal does naturally and differentially reward natural variations. Evolution of biological systems also involves differential selection of naturally occurring variations on existing systems. So it's certainly possible to get "something new" out of mere "variants of something [that already existed]".
That said, many cognitive systems do also seem capable of insight, which seems to be a completely different kind of process. Dennett and Schank here seem to be dismissing the very possibility of insight, though I assume they are doing so rhetorically.
What has a baby which does not understand speech "heard before", that it can form variations on? Evolution is fine, but you do need a theory of abiogenesis, or in this case aontogenesis - knowledge-from-nothing-ness, in the vernacular.
Wilkie Collins, Man and Wife, Chapter the Twentieth
“The future is always ideal: The fridge is stocked, the weather clear, the train runs on schedule and meetings end on time. Today, well, stuff happens.”
— Daniel Davies
When we roll our eyes at business school grads, it isn't because we don't believe in measuring anything. It's the same eyeroll that the 10 O'Clock news gets when they report the newest study linking molasses and cancer, which has nothing to do with my lack of belief in studies about cancer.
I thought quite a bit about how to measure whether I'm good at Salsa dancing on a particular night. I haven't found a measurement that's adequate.
I could use a measurement like "How close do women dance with me?" If a woman enjoys dancing with me she's likely to dance closer than if she doesn't. If, however, I measure my dancing skill by that variable, I'm likely to dance with some women in a way that's too close for them and makes them uncomfortable.
I could use a metric such as counting how often a woman asks for my name. If, however, I'm using that metric, I probably won't be the first to ask for a name, to increase the chances that the woman asks on her own.
If I'm using a metric such as being asked by women to dance, I'm less likely to ask on my own.
If I would hand a woman a sheet after a dance to rate my dancing, I would probably be seen as strange.
The average business school grad probably isn't doing very much Quantified Self on his own life. He doesn't know much about actually measuring what he cares about.
Women are not going to enjoy dancing with me more when I try to intellectually control their enjoyment by having a tight feedback loop on some proxy variable that I use to measure their enjoyment. It just doesn't work that way.
On the other hand, if I'm empathic, if I'm in a happy mood and get outside of my head, I'm more likely to have success in making women enjoy dancing with me.
The idea that being in your head and being focused on specific measurements is the only way to care is just flawed.
John McCarthy, adapting a line by T.H. Huxley
I'm fine with this quote as long as the conclusion is not "So let's just do science without any philosophy!"
Because usually that just means doing science with unexamined philosophical assumptions while deluding yourself that you're being objective. This goes badly; e.g., Copenhagen interpretation, neurobabble ("Libet experiment proves you have no free will!").
Your comment, with which I agree, inspired me to post this quote.
--Harry Dresden, Summer Knight, Jim Butcher
Here's the thing about air-travel-related complaints.
Air travel is really unpleasant. Oh sure, it's technologically impressive, but the actual experience is terrible: sitting in a cramped space for hours on end, being in close proximity to so many other people; the pressure changes and the noise; the long, tiring process of arriving for your flight, which often takes longer than the actual flight and is quite stressful; the humiliating and absurd security procedures, which these days look more and more like ways for the government to gratuitously exercise its power...
So we've got this really impressive means of travel, which our society seems to have conspired to make as unpleasant as humanly possible. Ok, maybe it's all excusable and inevitable, just for the sheer amazingness of "ooh, we're FLYING through the AIR and so FAST!" etc. But then, after we pay the airline such impressive amounts of money for this amazing-but-unpleasant convenience, they don't deign to even serve us good drinks?
And what do the drinks have to do with how technologically impressive flight is, anyway? Are the people responsible for the drinks also the people who build, maintain, and fly the planes? What, are the drinks the pilot's responsibility, and he just can't be bothered, what with all that keeping the plane upright that he has to do? Did the Boeing engineer have "serve good drinks" on his to-do list, but just plain didn't get to it, tired as he was from all that "making sure the wings don't fall off" he had to do? No! The people responsible for the drinks had one damn job! And they're doing it badly! And then when people complain, they have the gall to evade responsibility by attempting to take credit for all that amazing science and engineering?!
In short, the quote is analogous to:
"I mean, when you think about it, our society is pretty freaking remarkable. We have computers, and indoor plumbing, and hundreds of channels on cable. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of all of our modern conveniences, and it has totally revolutionized the face of our planet and societies.
But look anywhere in the world, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about being mugged.
Being mugged, people."
Yeah, "everything is amazing so why are you complaining about this unrelated bad thing" is a fallacy. At this rate, all complaints about everything, ever, are apparently unwarranted.
Well, there's the scenario where one person does both the engineering and the drinks, but only has a limited amount of effort in his job to exert, and he chooses to devote all of that effort to engineering the plane and only a tiny portion of it to ensuring the quality of the drinks. That scenario is obviously absurd.
But if you slightly modify that, person->company and effort->money, that's pretty much what's going on. The company has a limited amount of money to spend, and spending most of it on engineering and almost nothing on drinks has similar dynamics to a single worker who's choosing to spend all his time on engineering and almost nothing on drinks. Even if the company internally contains several workers and the engineer and the drink maintainer are different people.
You're using this remarkable set of interacting or interdependent components of interlinked hypertext documents in a global system of interconnected computer networks powered by a flow of electric charge to whine about a rationality quote! How quaint.
Back to the original quote for a bit... Dresden actually complains quite a bit. But after dealing with flaming monkey poo (literally), a White Court vampire as a friend, using a cleaning spell to deal with some giant scorpions, and who knows how many dead bodies (some of which were animated)... drinks seem really, really shallow to him. Not to mention he's trying hard not to think too much about how, if he lets his magic the least bit off the leash, it will crash the plane. (Something about complicated technology seems to override the rule "cannot accomplish what you don't believe in accomplishing.")
Moving back to real life, someone is willing to complain about the drinks while someone else is being mugged.
Furthermore, if the person's REAL complaint is about the unpleasant security measures, cramped seats, and air pressure changes, complaining about the drinks, even if the complaint gets the drinks to improve, will not really optimize much.
Well, my real complaint is about both/all of those things. It is possible to have multiple complaints, you know; and also it is possible to improve more than one thing, ever.
But this generalizes. Someone is willing to complain about being mugged while someone else is being violently assaulted. Someone is willing to complain about being violently assaulted while someone else is imprisoned and tortured. And so on...
There's no law that says we have to find The Worst Problem, devote all our resources to fixing it, and totally ignore every other problem that humanity has while The Worst Problem persists. Such a policy would lead to a rather horrifying world.
As always, a relevant xkcd.
Something similar has been seriously argued here for donations to charity: you should donate all your money to the single charity that would do the most good (unless you're a millionaire who can donate so much money that the charity will reduce the size of the problem to below the size of another problem).
http://lesswrong.com/lw/elo/a_mathematical_explanation_of_why_charity/ http://lesswrong.com/lw/gtm/when_should_you_give_to_multiple_charities/ http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/
Some of the comments have good arguments against this, however.
That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about [starving kid in Africa|environmental pollution|dying peacefully of old age|generally any way in which the world is suboptimal]."
That kind of outlook not only doesn't support any kind of progress, or even just utility maximization, it actively paints the very idea of making things even better as presumptuous and evil. It does not serve for something to be merely awe-inspiring; I want more. I want to not just watch a space shuttle launch (which is pretty cool on its own), but also have a drink that tastes better than any other in the world, with all of my best friends around me, while engaged in a thrilling intellectual conversation about strategy or tactics in the best game ever created. While a wizard turns us all into whales for a day. On a spaceship. A really cool spaceship. I don't just want good; I want the best. And I resent the implication that I'm just ungrateful for what I have. Hell, what would all those people that invested the blood, sweat, and tears to make modern flight possible say if they heard someone suggesting that we should just stick to the status quo because "it's already pretty good, why try to make it better?" I can guarantee they wouldn't agree.
Nonetheless it is important to have a firm grasp on the progress we have already attained. It's easy to go from "we haven't made any real progress" to "real progress is impossible". And so we should acknowledge the achievements we have made to date, while always striving to build on them.
You're right that it would indeed be a mistake to say "things are already great, let's stop here". But then, "things are really awful, so let's get better" doesn't sound quite right either. The attitude I would lean towards, and which I think is compatible with the quote, is "things are already pretty awesome, how could we make them even more awesome?".
The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also lets make it better" just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to say "this state is awesome;" you have to say "more awesome" or "less awesome" compared to something else.
Also, such behavior is not compatible with the quote. The quote advocates ignoring real suboptimal sections of the world and instead basking in how much better the world is than it used to be. How are you supposed to make the drinks better if you're not even allowed to admit they're not perfect? I could, with minor caveats, get behind "things are great, let's make them better", but that's not what the quote said. The quote advocates pretending that we've already achieved perfection.
Sure. But "things are pretty awesome" is faster to say than "our current world is more awesome than most of the worlds that have existed in history".
That's a valid interpretation of the quote, but not the only one. The way I read it, specifically the way it focused on the drinks and on the word "complain", it wasn't so much saying that we should pretend we've already achieved perfection, but rather that we should keep in mind what's worth feeling upset over and what isn't. In other words, don't waste your time complaining about drinks to anyone who will listen; instead focus your energies on something that you can actually change and which actually matters.
I don't think the comparison is to complaining about very bad things happening elsewhere, it's more like "we've got it so much easier than our forebears, why do people still complain about misspellings on the internet? They should be grateful they have an internet."
One fallacy is that the person who says this sort of thing fails to realize that complaining about complaining is still complaining.
Though people have complained about imperfect things for pretty much as far back as we have records, even when those things were less imperfect than anything that came before; so complaining about that isn't necessarily an instance of the thing being complained about.
Said less obscurely: if we assign the label kvetching to complaining about things even in the face of continual improvement, complaining about kvetching is not necessarily kvetching, since kvetching has continued unabated for generations.
I'm not saying we should settle for anything. Certainly not.
But to forget the awesomeness that already exists is a mistake with consequences. When looking at the big picture, it's important to realize that our current trajectory is upwards. When planning for something like space travel, it's important to remember that air travel sounded just as crazy a hundred years ago. And when thinking about thinking, it's worth remembering that this same effect will hit whatever awesome thing we think of next.
Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in a very tenuous manner. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.
I'm certainly cynical, but I see the point of complaining about the drinks.
Not all airplane tickets are sold at the same price. But basically everybody on the plane gets the same share of progress, science, technology, and human labour and sweat.
How, then, do we account for the pricing difference?
The drinks, people.
There's more pressure on a vet to get it right. People say "it was god's will" when granny dies, but they get angry when they lose a cow.
What? Putting down pets or livestock isn't that uncommon, whereas people go way out of their way (I seem to recall Robin Hanson mentioning a two-digit percentage of the US GDP, though I can't seem to find it) to prolong human lives long after they're no longer worth living.
Discworld is set in a time roughly parallel to the late 1700s or early 1800s. Medicine didn't really work, and livestock were significant capital.
Princess Bubblegum in Adventure Time.
Greg Egan, The Eternal Flame, ch. 38
Jack Handey
Stranger-Come-Knocking on why rationalists win life-or-death fights in The Heroes by Joe Abercrombie
-- Denis Healey
Reminds me of this one from Terry Pratchett:
"All you get if you are good at digging holes is a bigger shovel."
xkcd explains that the absence of evidence is evidence of absence.
Given the number of drones that fly around these days, the question of UFOs is settled: there are plenty of objects flying around that nobody can accurately identify.
Especially among hobbyist drones, there are models that really do look like flying saucers.
Umm, is it me being sleepy, or did he get P(I picked up a seashell) and P(I'm near the ocean) mixed up in the equation? P(near the ocean | evidence) shouldn't be inversely proportional to P(near the ocean). [ETA: Randall fixed it now.]
Well spotted. Bayes' rule is p(A | B) = p(B | A) * p(A) / p(B). The cartoon seems to have mixed up p(A) and p(B), just as you note.
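To spell out the correct dependence (using my own labels for the strip's example, A = "I'm near the ocean" and B = "I picked up a seashell"):

```latex
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}
\qquad\Longrightarrow\qquad
P(\text{ocean} \mid \text{shell}) \;\propto\; P(\text{ocean})
```

The posterior is directly proportional to the prior P(ocean) and inversely proportional only to P(shell); swapping the two in the denominator is exactly what made the strip's posterior fall as the prior rose.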
Variously attributed.
It's very easy for a rich person to become poor: just give all you have away. It's very hard for a poor person to become rich: almost all of them try, and very few succeed.
If people found, on reflection, that being poor was better than being rich, then they would give their wealth away. We don't observe this.
Therefore I believe being rich is better, even without the benefit of personal experience.
There could be a hedonic treadmill effect: as you get richer, you get more things, but eventually you get used to it and it stops being better than your old life. But you still don't want to give your wealth away, because you have gotten used to having more stuff, and you're not sure that you would get used back to your old way of life the way you get used to your new one.
My superficial knowledge of Seneca and the Stoics doesn't allow me to debate the premise fully. It does tell me that the claim that it is better can be debated. That people prefer to be rich does not make it better.
An aside: a rich man who gives away his wealth is not equal to a person who is poor from the start or has lost his riches. The person who gives it away keeps his connections, earns respect and, generally, is in a position to re-earn a fortune.
It's enough for a strong presumption of it being better, pending evidence to the contrary.
Taboo "better": there are preferences as belief, and preferences as revealed in actions. Actions are clearly in favour of being rich.
On the side of beliefs, there are certainly religions and ethical theories that say being poor is better. Personally, I strongly disagree with both this and many other beliefs of all such theories that I know about, not to mention religions.
There are of course ethical systems that say that while being rich may be good, giving away your wealth to charity is better still. Even plain self-interested consequentialism may tell you that you should give your money, perhaps to fight existential risk or to help develop FAI. I certainly agree that there is a tradeoff to be made; I'm only pointing out that in itself, rich is better than poor.
As for the Stoics, I too am not deeply familiar with their philosophy. But it seems to me that any concrete problems generated by wealth can be rather easily solved in practice by using some of that wealth.
That's not the case for all the people who have been poor and have been rich (see e.g. certain lottery winners).
I guess it largely depends on how one became rich, as well as how one spends the money.
Rich can be worse than poor, knowledge can be worse than ignorance, sickness can be better than health, and death can be better than life. But none of these are the way to bet.
It is also worth considering the relevant causal graph. Wealth --> Happiness allows of such exceptions. But what do they look like in terms of the causal graph Wealth --> Happiness <-- Character? If someone can't handle a sudden accession of money, is it the money or their personal failings that should be blamed? If you see a friend in that situation, do you advise them to get rid of their money or learn to handle it better?
Seth Roberts, ‘Something is better than nothing’, Nutrition, vol. 23, no. 11 (November, 2007), p. 912
One of the more useful class discussions I had consciously started with the opposite. The first question was what was good and useful in the week's reading. We proceeded to criticism, but starting with "is there anything useful here?" made the discussion more useful and positive.
Roberts, naturally, has a substantial interest in avoiding any criticism, and the work of people like Ioannidis and the eternal life of publication bias say that if anything, we are insufficiently critical...
I think we're looking at the wrong kind of criticism. Like, the kind of criticism you can make with almost equal ease of results that will and won't turn out to replicate later.
As you know, I agree with you that Roberts is incorrigibly biased, and I liked your earlier post on this. But I think we can be critical in the sense you have in mind, and still try to cultivate the attitude that I take Roberts to be hinting at. Perhaps this is not very clear in the passage I chose to quote though.
From an outside view, how can we distinguish this virtue-of-flawed-research from insiders refraining from criticizing each other for the sake of the reputation of the research field?
Insiders who practise the virtue of flawed research will still criticise the flaws, but they will also follow up on them with further studies that expand on a point or fix a methodology.
The problem that Roberts might be criticising is the sort of thinking that goes: I've made a criticism, now we can forget about the thing.
-- Bill Vaughan, accidentally anticipating the dangers of UFAI in 1969
You can also turn that around.
Suffice to say that AGI is a really big lever.
There's a saying about India, "Whatever you can rightly say about India, the opposite is also true."
"Whatever you can rightly say about India, the opposite is false."
No, no, no. "There exists a statement you can rightly say about India, whose opposite is false."
It works!
Always drive within your competence, at a speed which is appropriate to the circumstances so that you can stop safely in the distance that you can see to be clear.
In driving, as in life.
This advice really only applies in contexts where the risks of failure substantially outweigh the rewards of success. This isn't true in many contexts; if they're approximately equally balanced, it makes sense to attempt to work slightly above your level of competence in order to improve your skill, and if the rewards of success substantially outweigh the risks of failure it makes sense to be even more risk-loving.
I think that you may have misunderstood the point that I was trying to make. I am not advocating excessive caution. Rather, I value self-knowledge and knowledge of the environment and the people you interact with in that environment. Obviously, a certain amount of margin of error should be included in any decision making.
It has been my experience as a driving instructor that most pupils are entirely too cautious, especially on faster roads, where going too slowly may cause a following vehicle to attempt an unsafe overtaking manoeuvre.
JD
Scott Aaronson on optimal philanthropy (quoted somewhat out of context):
Derek Parfit, On What Matters, vol. 2, Oxford, 2011, p. 616
Repost.
Thanks, retracted.
I can't find the original source for this, but I got it from an image floating around Facebook.
Well yes, this is true, but one may reasonably prefer a high steady state over an increase to the current level. It's better to have A in the past and A now, than B in the past and A now. The increase is only to be preferred if it is from B to A+, which does not follow from the admission of error.
Michel de Montaigne, Essays, "On schoolmasters' learning"
Anna Salamon (paraphrase)
Do we allow quotes from lesswrong users and CFAR instructors now?
A policy that disallows Robin Hanson quotes but permits quotes from Anna Salamon would seem peculiar to me.
Whatever the actual rule is, the next time it should be spelled out explicitly.
I believe that one's meant to be a Japanese proverb.
dupe
I view those more as helpful labels for general trends. In many situations, there are pressures pushing against each other, and lending weight to one (by mentioning its general label) can push someone off-balance towards a better position. As they say, everything in moderation. ;)
Tyler Cowen, ‘Caring about the Distant Future: Why it Matters and What it Means’, University of Chicago Law Review, vol. 74, no. 1 (Winter, 2007), p. 10
They are different because when we pack the spaceship with fuel, we control with reasonable certainty whether they make a safe landing or not. As for our millions-of-years descendants, it's very hard to make any statement about us affecting them with >51% confidence (except "we shouldn't exterminate ourselves").
A lot of what looks like time discounting is really uncertainty discounting.
This is backwards. Everyone in an inertial frame thinks other people's clocks are slower. Acceleration is what causes the opposite, e.g. turning the spaceship around to come back.
You're right that Cowen got it backwards, but you're wrong about this:
Acceleration is not the cause. The reason the astronauts age less is that the path they follow through space-time corresponds to a smaller proper time than the path followed by people who remain on the Earth, and the proper time along a path is what a clock following that path measures. So it's a geometrical fact about the difference between the two paths that causes the asymmetrical aging, not the acceleration of the astronauts.
To make this obvious, it is possible to set up a scenario where another group of astronauts leaves Earth and then returns, accelerating the exact same amount as the first group, but following a path with larger proper time. This second group of astronauts will age more than the first group, even though the accelerations involved were the same.
A lot of elementary presentations of relativity identify acceleration as the relevant factor in twin paradox type cases, but this is wrong (or, more charitably, not entirely right).
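The geometrical point can be made concrete numerically. The proper time along a piecewise-constant-speed worldline is the sum of dt * sqrt(1 - v^2) over its segments (units with c = 1). Here is a toy sketch, idealising the boosts as instantaneous; the specific speeds and durations are my own illustrative choices:

```python
import math

def proper_time(segments):
    """Proper time (years, c = 1) along a worldline given as a list of
    (speed, coordinate-time duration) segments: sum of dt * sqrt(1 - v^2)."""
    return sum(dt * math.sqrt(1 - v**2) for v, dt in segments)

# Both twins undergo identical (idealised, instantaneous) accelerations,
# but follow different worldlines between them over 10 Earth-years.
# Twin A: coasts out at 0.8c for 5 years, then back at 0.8c for 5 years.
tau_A = proper_time([(0.8, 5), (0.8, 5)])
# Twin B: the same boosts, but coasts at 0.8c for only 1 year each way,
# spending the remaining 8 years at rest relative to Earth.
tau_B = proper_time([(0.8, 1), (0.0, 8), (0.8, 1)])

print(round(tau_A, 6))  # 6.0 -- ages 4 years less than Earth
print(round(tau_B, 6))  # 9.2 -- same accelerations, far less differential aging
```

Same accelerations, different proper times: the asymmetric aging tracks the geometry of the paths, not the acceleration itself.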
I agree in principle, but I have basically no confidence in my ability to figure out what to do to help people in the future. There are two obstacles: random error and bias. Random error, because predicting the future is hard. And bias, because any policy I decide I like could be justified as being good for the future people, and that assertion couldn't be refuted easily. The promise of helping even an enormous number of people in the future amounts to Pascal's Wager, where donating to this or that charity or working on this or that research is like choosing this or that religion; all the possibilities cancel out and I have no reliable guide to what to actually do.
Admittedly this is all "I failed my art" stuff rather than the other way around, but well, it's still true.
Is it some kind of non-sequitur? How is it related to positive discounting?
Probably because some are more real and others are less so.
Instructor in The Guardian comment section
Story I heard from a bookshop clerk about the (sadly deceased) Iain M. Banks. He was being interviewed on The South Bank Show and the interviewer asked, in a slightly condescending manner, "Why did you start writing science fiction?", and he replied "I wanted to make sure I was good enough first."
I think the main thing that can be said in defence of keeping the Constitution is simply that it is a Schelling point. We need something to base our system of laws on. What system do you choose? There are arguments for many options, and I'm not saying the Constitution is necessarily the best. But due to what you may perhaps call a historical accident, the Constitution is where we are now. This makes it a Schelling point among all the different options for a system to base our laws on.
Very true, although where the USA is now is really not "the Constitution" simpliciter, so much as "the Constitution + all case law."
Why is this a rationality quote?
The constitution can be amended therefore Americans are not bound by decisions made hundreds of years ago. There were 12 amendments passed in the 20th century, the last of which was an amendment that was proposed in 1789 and ratified in 1992.
cough 30 second case cough
Mary
I'd be interested in any specific examples of things AI workers can learn from philosophy at the present time. There has been at least one instance in the past: AI workers in the 1960s should have read Wittgenstein's discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word. But I can't think of any other instances.
Timeless decision theory, what I understand of it, bears a remarkable resemblance to Kant's Categorical Imperative. I'm re-reading Kant right now (it's been half a decade), but my primary recollection was that the categorical imperative boiled down to "make decisions not on your own behalf, but as though you decided for all rational agents in your situation."
Some related criticisms of EDT are weirdly reminiscent of Kant's critiques of other moral systems based on predicting the outcome of your actions. "Weirdly reminiscent of" rather than "reinventing" intentionally, but I try not to be too quick to dismiss older thinkers.
Can you elaborate on this? It sounds fascinating. I confess I can't make heads or tails of Wittgenstein.
Wittgenstein, in his discussion of games (specifically, his idea that concepts are delineated by fuzzy "family resemblance", rather than necessary and sufficient membership criteria) basically makes the same points as Eliezer does in these posts.
Representative quotes:
Mitch Hedberg
Penny Arcade on Pascal's Wager.
--David Eagleman, Incognito: The Secret Lives of the Brain, Random House, pp. 221-222
If you could damage wires in a certain way and make the voices forget how to pronounce nouns, eliminate their short-term but not long-term memory, damage their color words, and so on, you would have a solid case for the wires doing internal, functional information-processing in causal arrangements which permitted the final output to be permuted in ways that corresponded to perturbing particular causal nodes. In much the same way, a calculator might be thought to be a radio if you are ignorant of its internals, but if you have a hypothesis that the calculator contains a binary half-adder and you can perturb particular transistors and see wrong answers in a way that matches what the half-adder hypothesis predicts for perturbing that transistor, you have shown the answers are generated internally rather than externally. In a world where we can directly monitor a cat's thalamus and reconstruct part of its visual processing field, the radio hypothesis is not just privileging a hypothesis without evidence, it is frantically clinging to a hypothesis with strong contrary evidence in denial of a hypothesis with detailed confirming evidence.
(I don't think the cat experiments are very conclusive here. As far as I know, the functions that have been identified in the early visual system are things like edge detection and motion detection. But such functions are used for video compression. So not only could a radio set perform them in principle, an ordinary digital TV set already does.)
I don't think this is quite where the analogy was. The brain's information-processing features you describe seem to be analogous to the radio's volume and clarity... it seems Eagleman was trying to compare the radio's content not to the brain's content, but to consciousness or something. At least, that's the best steelmanning attempt I've got.
This isn't an ancient pre-scientific text; it was written in 2011. I completely disagree with the claim that:
There's also nothing in our current science that rules out a teapot orbiting the sun. That does not mean a hypothesis with no evidence for it should be elevated to the level of serious discussion.
There is no reason to think the brain could possibly be receiving "marching orders" from elsewhere, and we absolutely should discard this concept and rule firmly against it. And the same goes for any other equally unfounded ideas that this is an allegory for.
No, because there is an infinity of ideas you could consider. You must wait until evidence weighs sufficiently in favor of some one idea to elevate it above the others, before considering it at all.
Some of the things you would discover would include that in some locations the voices don't show up. Investigating that, you would find that deep in caves they were gone. If you had access to the materials radios are made from, you would discover that in a metal box the voices don't show up. You would infer from this that the voices are coming from outside and are somehow picked up by the box. You might also discover by putting pieces of radios together differently that you could get your own voice to come out of the speaker by hooking up two speakers in series with the power source.
My point is that you would learn a lot more about what is really going on then this long quote suggests.
Stanislaw Lem, The Futurological Congress (1971)