Rationality Quotes: April 2011
You all know the rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Comments (384)
Richard Dawkins, The Selfish Gene.
(cf. Disguised Queries.)
-- Razib Khan
I think Donald Robert Perry said it more succinctly:
Proverbs 9:7-9
Provided your rebuke is sound.
I registered here just to upvote this. As someone who attends a university where this sort of thing is RAMPANT, thank you for the post.
It would also be fair to say that being intellectual can often be a dampener of conversation. I say this to emphasize that the problem isn't statistics or probabilistic thinking - but rather forcing rigour in general, particularly when in the form of challenging what other people say.
I usually use the word "intellectual" to refer to someone who talks about ideas, not necessarily in an intelligent way.
— Grossman's Law
Is there a law that states that all simple problems have complex, hard-to-understand answers? Moravec's paradox sort of covers it, but it seems that principle should have its own label.
A fable:
The original source of this fable seems to be lost to time. This version was written by Idries Shah.
We are built to be effective animals, not happy ones.
-Robert Wright, The Moral Animal
— David Foster Wallace, The Pale King
– Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
-Steven Pinker
On perseverance:
-- Robert Strauss
(Although the reference I found doesn't say which Robert Strauss it was)
I think it goes well with the article Make an Extraordinary Effort.
I kind of feel like a scenario is not a great starting point for talking about perseverance when it's likely to result in your immediately getting your arms ripped off.
There are times when it's important to persevere, and times when it's important to know what not to try in the first place.
And there are times when you don't get to choose whether or not you wrestle the gorilla.
Penn Jellete
Upvoted. Also, Jillette.
Paul Graham "What I've learned from Hacker News"
-- Richard Feynman
(I don't think he originally meant this in the context of overcoming cognitive bias, but it seems to apply well to that too.)
I think it was originally meant in the context of joy in the merely real.
-Nobilis RPG 3rd edition
...that was written by a Less Wrong reader. Or if not, someone who independently reinvented things to well past the point where I want to talk to them. Do you know the author?
The author of most of the Nobilis work is Jenna K. Moran. I'm unsure if this remark is independent of LW or not. The Third Edition (where that quote is from) was published this year, so it is possible that LW influenced it.
Heh, I clicked the link to see when she took over Nobilis from Rebecca Borgstrom, only to find that she took over more than that from her.
Edit: Also, serious memetic hazard warning with regard to her fiction blog, which is linked from the article.
I'm not sure it's a memetic hazard, but this post is one of the most Hofstadterian things outside of Hofstadter.
Until this moment, I had always assumed that Eliezer had read 100% of all fiction.
Or just someone else who read Pearl, no?
Hasn't using DAGs to talk about causality long been a staple of the philosophy and computer science of causation? The logical positivist philosopher Hans Reichenbach used directed acyclic graphs to depict causal relationships between events in his book The Direction of Time (1956). (See, e.g., p. 37.)
A little searching online also turned up this 1977 article in Proc Annu Symp Comput Appl Med Care. From p. 72:
That article came out around the time of Pearl's first papers, and it doesn't cite him. Had his ideas already reached that level of saturation?
ETA: I've looked a little more closely at the 1977 paper, which is entitled "Problems in the Design of Knowledge Bases for Medical Consultation". It appears to completely lack the idea of performing surgery on the DAGs, though I may have missed something. Here is a longer quote from the paper (p. 72):
So, when it comes to demystifying causation, there is still a long distance from merely using DAGs to using DAGs in the particularly insightful way that Pearl does.
Hi, you might want to consider this paper:
http://www.ssc.wisc.edu/soc/class/soc952/Wright/Wright_The%20Method%20of%20Path%20Coefficients.pdf
This paper is remarkable not only because it correctly formalizes causation in linear models using DAGs, but also because it gives a method for connecting causal and observational quantities in a way that's still in use today. (The method itself was proposed in 1923, I believe). Edit: apparently in 1920-21, with the earliest known reference dating back to 1918.
Using DAGs for causality certainly predates Pearl. Identifying "randomization on X" with "dividing by P(x | pa(x))" might be implicit in fairly old papers also. Again, this idea predates Pearl.
There's always more to the story than one insightful book.
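The "dividing by P(x | pa(x))" identification mentioned above can be made concrete with a toy sketch. All numbers below are hypothetical, invented purely for illustration: a binary DAG with a confounder U → X, U → Y and a direct edge X → Y. Intervening on X amounts to deleting the factor P(x | u) from the joint (equivalently, dividing by P(x | pa(x))), after which the interventional P(Y | do(X)) generally differs from the observational P(Y | X):

```python
# Hypothetical CPTs for a toy DAG: U -> X, U -> Y, X -> Y (all variables binary).
p_u = {0: 0.6, 1: 0.4}                                    # P(U = u)
p_x_given_u = {0: {0: 0.8, 1: 0.2},                       # p_x_given_u[u][x] = P(X = x | U = u)
               1: {0: 0.3, 1: 0.7}}
p_y1_given_xu = {(0, 0): 0.1, (0, 1): 0.5,                # key (x, u) -> P(Y = 1 | X = x, U = u)
                 (1, 0): 0.4, (1, 1): 0.9}

def p_joint(u, x, y):
    """Joint probability factored according to the DAG."""
    py1 = p_y1_given_xu[(x, u)]
    return p_u[u] * p_x_given_u[u][x] * (py1 if y == 1 else 1 - py1)

# Observational P(Y=1 | X=1): condition on X=1 in the joint distribution.
num = sum(p_joint(u, 1, 1) for u in (0, 1))
den = sum(p_joint(u, 1, y) for u in (0, 1) for y in (0, 1))
obs = num / den

# Interventional P(Y=1 | do(X=1)): truncated factorization -- delete the
# factor P(x | pa(x)) = P(x | u) from the joint, then marginalize over U.
do = sum(p_u[u] * p_y1_given_xu[(1, u)] for u in (0, 1))

print(obs, do)  # observational 0.75 vs interventional 0.6
```

With these invented numbers the confounder inflates the observational association (0.75) above the causal effect (0.6), which is exactly the gap that conditioning on a DAG's parents, rather than merely drawing the DAG, is meant to close.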
Good find, thanks. The handwritten equations are especially nice.
Ilya, it looks like you're the perfect person to write an introductory LW post about causal graphs. We don't have any good intro to the topic showing why it is important and non-obvious (e.g. the smoking/tar/cancer example). I'm willing to read drafts, but given your credentials I think it's not necessary :-)
The point is that it's not commonly internalized to the point where someone will correctly use DAG as a synonym for "universe".
Synonym? Not just 'capable of being used to perfectly represent', but an actual literal synonym? That's a remarkable claim. I'm not saying I outright don't believe it but it is something I would want to see explained in detail first.
Would reading Pearl (competently) be sufficient to make someone use the term DAG correctly in that sense?
All that I see in the quote is that the DAG is taken to determine what happens to you in some unanalyzed sense. You often hear similar statements saying that the cold equations of physics determine your fate, but the speaker is not necessarily thinking of "equations of physics" as synonymous with "universe".
Seriously, she seems pretty awesome. link to Johns Hopkins profile
The memes are getting out there! (Hopefully.)
No, hopefully they were re-discovered. We can improve our publicity skills, but we can't make ideas easier to independently re-invent.
Really? If meme Z is the result of meme X and Y colliding, then it seems like spreading X and Y makes it easier to independently re-invent Z.
-- PartiallyClips, "Windmill"
Nancy Lebovitz came across this too.
Well, I guess that's information about how many people click links and upvote the comments that contained them based on the quality of the linked content.
Not to argue that transcribing the text of the comic isn't valuable (I do actually appreciate it), but it's also information about how many people go back and vote on comments from posts imported from OB.
I thought the correct response should be "Is the thing in fact a giant or a windmill?" Rather than considering which way our maps should be biased, what's the actual territory?
I do tech support, and often get responses like "I think so," and I usually respond with "Let's find out."
Giant/windmill differentiation is not a zero-cost operation.
In the "evil giant vs windmill" question, the prior probability of it being an evil giant is vanishingly close to zero, and the prior probability of it being a windmill is pretty much one minus the chance that it's an evil giant. Spending effort discovering the actual territory when every map ever shows it's a windmill sounds like a waste of effort.
What about a chunk of probability for the case of where it's neither giant nor windmill?
Very few things barring the evil giant have the ability to imitate a windmill. I did leave some wiggle room with
because I wished to allow for the chance it may be a bloody great mimic.
A missile silo disguised as a windmill? A helicopter in an unfortunate position? An odd and inefficient form of rotating radar antenna? A shuttle in launch position? (if one squints, they might think it's a broken windmill with the vanes having fallen off or something)
These are all just off the top of my head. Remember, if we're talking about someone who tends to, when they see a windmill, be unsure whether it's a windmill or an evil giant, there's probably a reasonable chance that they tend to get confused by other objects too, right? :)
You are right! Even I, firmly settled in the fourth camp, was tricked by the false dichotomy of windmill and evil giant.
To be fair, there's also the possibility that someone disguised a windmill as an evil giant. ;)
A good giant?
Sure, but I wouldn't give a "good giant" really any more probability than an "evil giant". Both fall into the "completely negligible" hole. :)
Though, as we all know, if we do find one, the correct action to take is to climb up so that one can stand on its shoulders. :)
I thought we were listing anything at least as plausible as the evil giant hypothesis. I have no information as the morality distribution of giants in general so I use maximum entropy and assign 'evil giant' and 'good giant' equal probability.
Given complexity of value, 'evil giant' and 'good giant' should not be weighted equally; if we have no specific information about the morality distribution of giants, then as with any optimization process, 'good' is a much, much smaller target than 'evil' (if we're including apparently-human-hostile indifference).
Unless we believe them to be evolutionarily close to humans, or to have evolved under some selection pressures similar to those that produced morality, etc., in which case we can do a bit better than a complexity prior for moral motivations.
(For more on this, check out my new blog, Overcoming Giants.)
Or, possibly, a great big fan! In fact with some (unlikely) designs it would be impossible to tell whether it was a fan or a windmill without knowledge of what is on the other end of the connected power lines.
Do you consider yourself "objective and wise"?
I'd consider myself puzzled. Unidentified object: is it a threat, a potential asset, some kind of Black Swan? Might need to do something even without positive identification. Will probably need to do something to get a better read on the thing.
And then there's the fact that we are giving much more consideration to the existence of evil giants than to the existence of good giants.
That is truly incredible, I regret only that I have but one upvote to give.
Best quote I've seen in a long time!
-- genesplicer on Something Awful Forums, via
I wonder if the default price was more like $10.
Wow, anchoring! That one didn't even occur to me!
Note to self: do not buy stuff from Nancy Lebovitz.
Better yet, don't go gaga. And use anchoring to your advantage - before haggling, talk about something you got for free.
Story kind of bothers me. Yeah, you can get someone to pretend not to believe something by offering a fiscal reward, but that doesn't prove anything.
If I were a geologist and correctly identified the crystal as the rare and valuable mineral unobtainite which I had been desperately seeking samples of, but Tony stubbornly insisted it was quartz - and if Tony then told me it was $150 if it was unobtainite but $15 if it was quartz - I'd call it quartz too if it meant I could get my sample for cheaper. So what?
I think the interesting part of the story is that it caused the power crystal dude to shut up about power crystals when he'd previously evinced interest in telling everyone about them. I don't think you could get the same effect for $135 from a lot of, say, missionaries.
Part of me wants to say that it was foolish of Tony to take so much less money than he could have gotten simply for getting the guy to profess that it was a piece of quartz rather than a power crystal, but I'm not sure I would feel comfortable exploiting a guy's delusions to that degree either.
There's no guarantee the guy would have bought it at all for $150. The impression I get is that this was ultimately a case of belief in belief, Tony knew he couldn't get much more than $15 and just wanted to win the argument.
I doubt he would have bought it for $150, but after making a big deal of its properties as a power crystal, he'd be limited in his leverage to haggle it down; he'd probably have taken it for three times the asking price if not ten.
I thank Tony for not taking the immediately self-benefiting path of profit and instead doing his small part to raise the sanity waterline.
Was the buyer sane enough to realise that it probably wasn't a power crystal, or just sane enough to realise that if he pretended it wasn't a power crystal he'd save $135?
Is that amount of raising-the-sanity waterline worth $135 to Tony?
I would guess it's guilt-avoidance at work here.
(EDIT: your thanks to Tony are still valid though!)
And with that in mind, how would it have affected the sanity waterline if Tony had donated that $135 to an institution that's pursuing the improvement of human rationality?
Look, sometimes you've just got to do things because they're awesome.
But would you feel comfortable with that maxim encoded in an AI's utility function?
For a sufficiently rigorous definition of "awesome", why not?
If its a terminal value then CEV should converge to it.
I think he would have been better off taking the money and donating it to a good charity.
And then the guy walks away trying to prevent himself from bursting out with laughter at the fact that he just managed to get an incredibly good deal on a strong power crystal that Tony, who had clearly not been educated in such things, mistakenly believed was simple quartz.
Meh. Tony ruined that guy's role-playing fun at a Ren Faire. People pretend to believe all kinds of silly stuff at a Ren Faire.
Last year my husband and I went to Ren Faire dressed as monks, pushing our daughter, dressed as a baby dragon, around in a stroller. (We got lots of comments about vows of celibacy.) We bought our daughter a little flower-shaped hair pin when we were there, after asking what would look best on a dragon. What Tony did would have been like the salesperson saying "That's not a dragon."
It is not really a quote, but a good quip from an otherwise lame recent internet discussion:
Matt: Ok, for all of the people responding above who admit to not having a soul, I think this means that it is morally ok for me to do anything I want to you, just as it is morally ok for me to turn off my computer at the end of the day. Some of us do have souls, though.
Igor: Matt - I agree that people who need a belief in souls to understand the difference between killing a person and turning off a computer should just continue to believe in souls.
This is, of course, pretty much the right answer to anyone who asserts that without God, they could just kill anyone they wanted.
Of course, my original comment had nothing to do with god. It had to do with "souls", for lack of a better term as that was the term that was used in the original discussion (suggest reading the original post if you want to know more---basically, as I understand the intent it simply referred to some hypothetical quality that is associated with consciousness that lies outside the realm of what is simulable on a Turing machine).

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer...either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don't talk about consciousness, qualia, hopes, whatever, unless you can precisely quantify those concepts in the language of Turing machines). And in the second case, your answer should allow me to determine where the moral balance between human and computers lies....would it be morally bad to turn off a primitive AI, for example, with intelligence at the level of a mouse?
Your question has the form:
If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?
This following question also has this form:
If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?
And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.
Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.
At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut off the program? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and those of napalm are not.
As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.
We haven't figured out how to turn it back on again. Once we do, maybe it will become morally ok to turn people off.
Because people are really annoying, but we need to be able to live with each other.
We need strong inhibitions against killing each other-- there are exceptions (self-defense, war), but it's a big win if we can pretty much trust each other not to be deadly.
We'd be a lot more cautious about turning off computers if they could turn us off in response.
None of this is to deny that turning off a computer is temporary and turning off a human isn't. Note that people are more inhibited about destroying computers (though much less so than about killing people) than they are about turning computers off.
Doesn't general anesthetic count? I thought that was the turning off of the brain. I was completely "out" when I had it administered to me.
If I believed that when I turned off my computer it would need to be monitored by a specialist or it might never come back on again, I would be hesitant to turn it off as well.
And indeed, mainframes & supercomputers are famous for never shutting down or doing so on timespans measured in decades and with intense supervision on the rare occasion that they do.
It certainly doesn't put a halt to brain activity. You might not be aware of anything that's going on while you're under, or remember anything afterwards (although some people do,) but that doesn't mean that your brain isn't doing anything. If you put someone under general anesthesia while monitoring them with an electroencephalogram, you'd register plenty of activity.
Is it sufficient to say that humans are able to consider the question? That humans possess an ability to abstract patterns from experience so as to predict upcoming events, and that exercise of this ability leads to a concept of self as a future agent.
Is it necessary that this model of identity incorporate relationships with peers? I think so but am not sure. Perhaps it is only necessary that the ability to abstract be recursive.
Hmm, I don't happen to find your argument very convincing. I mean, what it does is to pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.
But is this sufficient? You can model the statement "apples and oranges are good fruits" in predicate logic as "for all x, Apple(x) or Orange(x) implies Good(x)" or in propositional logic as "A and O" or even just "Z". But it should really depend on what aspect of the original statement you want to get at. You want a model which captures precisely those aspects you want to work with.
So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example. All the while not really understanding which was relevant to the confusion. It wasn't a pleasant experience :(
It seems to me much simpler to simply answer: "Turing machine-ness has no bearing on moral worth". This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.
Or further guess at the source of the confusion, the person was trying to think along the lines of: "Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don't have moral worth... So humans don't have moral worth! OH NOES!!!"
Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren't usually very intuitively convincing.
(but I could be generalizing from one example here)
No, I was not trying to think along those lines. I must say, I worried in advance that discussing philosophy with people here would be fruitless, but I was lured over by a link, and it seems worse than I feared. In case it isn't clear, I'm perfectly aware what a Turing machine is; incidentally, while I'm not a computer scientist, I am a professional mathematical physicist with a strong interest in computation, so I'm not sitting around saying "OH NOES" while being ignorant of the terms I'm using.

I'm trying to highlight one aspect of an issue that appears in many cases: if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines, what are the implications if we do any of the obvious things? (replaying, turning off, etc...) I haven't yet seen any reasonable answer, other than 1) this is too hard for us to work out, but someday perhaps we will understand it (the original answer, and I think a good one in its acknowledgment of ignorance, always a valid answer and a good guide that someone might have thought about things) and 2) some pointless and wrong mocking (your answer, and I think a bad one). edit to add: forgot, of course, to put my current guess as to most likely answer, 3) that consciousness isn't possible for Turing machines.
This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines.
Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines.
To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.
Understanding this will help "dissolve" or "un-ask" your question, by removing the incorrect premise (that humans are not Turing machines) that leads you to ask your question.
That is, if you already know that humans are a subset of Turing machines, then it makes no sense to ask what morally justifies treating them differently than the superset, or to try to use this question as a way to justify taking them out of the larger set.
IOW, (the set of humans) is a subset of (the set of turing machines implementing consciousness), which in turn is a proper subset of (the set of turing machines). Obviously, there's a moral issue where the first two subsets are concerned, but not for (the set of turing machines not implementing consciousness).
In addition, there may be some issues as to when and how you're doing the turning off, whether they'll be turned back on, whether consent is involved, etc... but the larger set of "turing machines" is obviously not relevant.
I hope that you actually wanted an answer to your question; if so, this is it.
(In the event you wish to argue for another answer being likely, you'll need to start with some hard evidence that human behavior is NOT being Turing-computable... and that is a tough road to climb. Essentially, you're going to end up in zombie country.)
That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.
Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent from pure Turing machines resting in a box on a table.
That consciousness is about computation alone may indeed end up true, but it's as yet unproven.
Can you expand on why you expect human moral intuition to give reasonably clear answers when applied to situations involving conscious machines ?
Another option:
- It's morally acceptable to terminate a conscious program if it wants to be terminated.
- It's morally questionable (wrong, but to a lesser degree) to terminate a conscious program against its will if it is also possible to resume execution.
- It is horribly wrong to turn off a conscious program against its will if it cannot be resumed (murder fits this description currently).
- Performing other operations on the program that it desires would likely be morally acceptable, unless the changes are socially unacceptable.
- Performing other operations on the program against its will is morally unacceptable to a variable degree (brainwashing fits in this category).
These seem rather intuitive to me, and for the most part I just extrapolated from what it is moral to do to a human. Conscious program refers here to one running on any system, including wetware, such that these apply to humans as well. I should note that I am in favor of euthanasia in many cases, in case that part causes confusion.
This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are that (mine) there is something about human consciousness that cannot be explained within the language of Turing machines and (yours) there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both people at least admit that consciousness has no explanation currently, and absent future discoveries I don't think there is a sure way to tell which one is right.
I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? Let us assume that the painful experience has happened once...I just ask whether it would be wrong to rerun that experience. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, for example, if I make a backup copy of such a program, and then allow that backup to run for a short period of time under slightly different stimuli, at which point does that copy acquire an existence of its own, that would make it wrong to delete that copy in favor of the original? I could give many other similar questions, and my point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and that matches your assumptions (not that developing a full theory of morality under my assumptions is that much easier).
Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than other disciplines, but it is a mistake to think that a professional, world-class research level, knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one towards the soulless viewpoint.
I am sceptical of your having a rigorous theory of morality. If you do have one, I am sceptical that it would be undone by accepting the proposition that human consciousness is computable.
I don't have one either, but I also don't have any reason to believe in the human meat-computer performing non-computable operations. I actually believe in God more than I believe in that :)
I agree that such moral questions are difficult - but I don't see how the difficulty of such questions could constitute evidence about whether a program can "be conscious" or "have a soul" (whatever those mean) or be morally relevant (which has the advantage of being less abstract a concept).
You can ask those same questions without mentioning Turing Machines: what if we have a device capable of making a perfect copy of any physical object, down to each individual quark? Is it morally wrong to kill such a copy of a human? Does the answer to that question have any relevance to the question of whether building such a device is physically possible?
To me, it sounds a bit like saying that since our protocol for seating people around a table are meaningless in zero gravity, then people cannot possibly live in zero gravity.
btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.
This website has an entire two-year course of daily readings that precisely identifies which parts are open questions, and which ones are resolved, as well as how to understand why certain of your questions aren't even coherent questions in the first place.
This is why you're in the same position as a creationist who hasn't studied any biology - you need to actually study this, and I don't mean, "skim through looking for stuff to argue with", either.
Because otherwise, you're just going to sit there mocking the answers you get, and asking silly questions like why are there still apes if we evolved from apes... before you move on to arguments about why you shouldn't have to study anything, and that if you can't get a simple answer about evolution then it must be wrong.
However, just as in the evolutionary case, just as in the earth-being-flat case, just as in the sun-going-round-the-world case, the default human intuitions about consciousness and identity are just plain wrong...
And every one of the subjects and questions you're bringing up, has premises rooted in those false intuitions. Until you learn where those intuitions come from, why our particular neural architecture and evolutionary psychology generates them, and how utterly unfounded in physical terms they are, you'll continue to think about consciousness and identity "magically", without even noticing that you're doing it.
This is why, in the world at large, these questions are considered by so many to be open questions -- because to actually grasp the answers requires that you be able to fully reject certain categories of intuition and bias that are hard-wired into human brains.
(And which, incidentally, have a large overlap with the categories of intuition that make other supernatural notions so intuitively appealing to most human beings.)
I love this comment. Have a cookie.
Agreed. Constant, have another one on me. Alicorn, it's ironic that the first time I saw this reply pattern was in Yvain's comment to one of your posts.
Why not napalm?
It's greasy and will stain your clothes.
I like Constant's reply, but it's also worth emphasizing that we can't solve scientific problems by interrogating our moral intuitions. The categories we instinctively sort things into are not perfectly aligned with reality.
Suppose we'd evolved in an environment with sophisticated 2011-era artificially intelligent Turing-computable robots--ones that could communicate their needs to humans, remember and reward those who cooperated, and attack those who betrayed them. I think it's likely we'd evolve to instinctively think of them as made of different stuff than anything we could possibly make ourselves, because that would be true for millions of years. We'd evolve to feel moral obligations toward them, to a point, because that would be evolutionarily advantageous, to a point. Once we developed philosophy, we might take this moral feeling as evidence that they're not Turing-computable--after all, we don't have any moral obligations to a mere mass of tape.
Hi Matt, thanks for dropping by. Here is an older comment of mine that tries to directly address what I consider the hardest of your questions: How to distinguish from the outside between two computational processes, one conscious, the other not. I'll copy it here for convenience. Most of the replies to you here can be safely considered Less Wrong consensus opinion, but I am definitely not claiming that about my reply.
I start my answer with a Minsky quote:
I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious, the other not.
Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.
This ends my old comment, and I will just add a footnote related to ethical implications. With HonoreDB, I can in principle imagine a world with cooperating and competing agents, some conscious, others not, but otherwise having similar negotiating power. I believe that the ethical norms emerging in this imagined world would not even mention consciousness. If you want to build an ethical system for humans, you can "arbitrarily" decide that protecting consciousness is a terminal value. Why not? But if you want to build a non-anthropocentric ethical system, you will see that the question of consciousness is orthogonal to its issues.
Hah! Just found in today's NewsThump: We’d be total shits if it wasn’t for Jesus, admit Christians
-Kris Straub, Chainsawsuit artist commentary
I recently posted these in another thread, but I think they're worth putting here to stand on their own:
Terry Pratchett, "Nation"
William T. Powers (CSGNET mailing list, April 2005)
Does that mean one can answer "Do you believe in magic?" with "No, but I believe in the existence of opaque proprietary APIs"?
This one's for you, Clippy:
—Marshall McLuhan
Vi Hart, How To Snakes
Vi Hart is so dang awesome.
"But these two snakes can't talk because this one speaks in parseltongue and that one speaks in Python"
Damn, why didn't I discover those before ...
"Man, it seems like everyone has a triangle these days..."
From a forum signature:
Also Neil Gaiman.
Even my theist girlfriend laughed out loud at that one :-)
I meant to say that I think the theory I was testing has been disproved, or at least dealt a major blow, which is why I'm shifting my thinking towards something a bit different. Mission of being wrong accomplished!
On boldness:
-- Augiedog, Half the Day is Night
(Edit: I should mention that the linked story is MLP fanfic. The MLP fandom may be a memetic hazard; it seems to have taken over my life for the past several days, though I tend to do that with most things, so YMMV. Proceed with caution.)
Joseph Heller (Catch-22)
~ Story, used most famously in David Foster Wallace's Commencement Address at Kenyon College
-- Paul Graham
Okay, that quote has me upvoting and closing my LessWrong browser.
And this just reminded me to check the time and realise I was 40 minutes late for logging into work (cough). LessWrong as memetic hazard!
What exactly would Paul Graham call reading Paul Graham essays online when I should be working?
Perhaps the answer to that question lies in one or more of the following Paul Graham essays:
Disconnecting Distraction
Good and Bad Procrastination
P.S.: Bwahahahaha!
When it comes to learning on the internet (including, as wedrifid mentions, reading Graham's essays, but excluding e.g. porn and celebrity gossip), I'd say it's a lot less harmful and risky than being drunk, and probably helpful in a lot of ways. It's certainly not making huge strides toward accomplishing your life's goals, but it seems like a stretch to compare it to getting drunk.
I think PG's analogy referred to addictiveness, not harmfulness.
Is it bad if you're addicted to good things?
If it's getting in the way of other stuff you want/need to do, then yes. Otherwise probably no.
No, but in this case the addiction makes you worse off because surfing the net is worse than doing productive work.
“In life as in poker, the occasional coup does not necessarily demonstrate skill and superlative performance is not the ability to eliminate chance, but the capacity to deliver good outcomes over and over again. That is how we know Warren Buffett is a skilled investor and Johnny Chan a skilled poker player.” — John Kay, Financial Times
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”
~ Aristotle
(Courtesy of my dad)
Arthur Rimbaud, 1873
"I can't make myself believe something that I don't believe" —Ricky Gervais, in discussing his atheism
Reminds me of the scene in HPMOR where Harry makes Draco a scientist.
Dupe.
Francis Paget, preface to the 2nd ed. of "The Spirit of Discipline", 1906
http://www.archive.org/details/thespiritofdisc00pageuoft
The book also contains material on accidie (the Introductory Essay and the preface to the seventh edition), which is probably how I came across it.
– M. Spivak: Calculus
--George Spencer Brown in The Laws of Form, 1969.
– Bertrand Russell
Not a big fan of this. Seems like you could replace the word "think" with many different verbs, and the result would sound good or bad depending on whether I consider that verb one of my virtues. For instance, replace "think" with "exercise": I would like the quote if I'm a regular exerciser, but if I'm not, I'd wonder why I would want to waste my life exercising.
Richard D. Janda and Brian D. Joseph, 2003, The Handbook of Historical Linguistics, p. 111.
– Steven Kaas
Douglas Adams
This quote defines my approach to science and philosophy: a phenomenon can be wondrous on its own merit; it need not be magical or extraordinary to have value.
Is this from a particular book, or something he said randomly?
It's from the first Hitchhiker's Guide to the Galaxy book.
Really? What's the context?
Zaphod thinks they're on a mythic quest to find the lost planet Magrathea. They've found a lost planet alright, orbiting twin stars, but Ford still doesn't believe.
Of course, in context, they are in fact orbiting the lost planet of Magrathea.
Well, in true fact, there is no lost planet of Magrathea.
I imagine it is from one of his books but I came across it in the introduction to The God Delusion by Richard Dawkins. Oddly enough the Hitchhiker series is absolutely full of satirical quotes which can be applied to rationality.
-- The Killers in This is Your Life
-- Christopher Hitchens, Letters to a Young Contrarian
– Mencken, quoted in Pinker: How the Mind Works
The north went on forever. Tyrion Lannister knew the maps as well as anyone, but a fortnight on the wild track that passed for the kingsroad up here had brought home the lesson that the map was one thing and the land quite another.
--George R. R. Martin, A Game of Thrones
I will repost a quote that I posted many moons ago on OB, if you don't mind. I don't THINK this breaks the rules too badly, since that post didn't get its fair share of karma. Here's the first time: http://lesswrong.com/lw/uj/rationality_quotes_18/nrt
"He knew well that fate and chance never come to the aid of those who replace action with pleas and laments. He who walks conquers the road. Let his legs grow tired and weak on the way - he must crawl on his hands and knees, and then surely, he will see in the night a distant light of hot campfires, and upon approaching, will see a merchants' caravan; and this caravan will surely happen to be going the right way, and there will be a free camel, upon which the traveler will reach his destination. Meanwhile, he who sits on the road and wallows in despair - no matter how much he cries and complains - will evoke no compassion in the soulless rocks. He will die in the desert, his corpse will become meat for foul hyenas, his bones will be buried in hot sand. How many people died prematurely, and only because they didn't love life strongly enough! Hodja Nasreddin considered such a death humiliating for a human being.
"No" - said he to himself and, gritting his teeth, repeated wrathfully: "No! I won't die today! I don't want to die!""
-- Surviving The World
I initially parsed that as meaning something like "we're clearly not getting the mechanics of evolution across, since people in the comics [and by extension writers] are happy to treat it as something that can produce superheroes". But in context it actually seems to mean "let's create some superheroes to demonstrate the efficacy of evolution beyond any reasonable doubt".
Comic exaggeration, sure, and I'm probably supposed to interpret the word "evolution" very loosely if I want to take the quote at all seriously. But in view of the former, I still can't help but think that there's something fundamentally naive about the latter.
I didn't quote the commentary under the comic for a reason.
"Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion."
-Hume, An Enquiry Concerning Human Understanding
Doesn't that mean "An Enquiry Concerning Human Understanding" should be committed to the flames? I didn't notice much numerical or experimental reasoning in it.
The quote is somewhat experimental, but we'd have to ignore its advice to find out if it was correct.
Personally I enjoy illusions - some of them look pretty. I'm keeping them.
-- Alan Perlis
Since I discovered them through SICP, I always liked the 'Perlisms' -- many of his Epigrams on Programming are pretty good. There's a hint of Searle/Chinese Room in this particular quote, but he turns it around by implying that in the end, the symbols are numbers (or that's how I read it).
—Antoine de Saint-Exupéry
A domain-specific interpretation of the same concept:
—Douglas McIlroy
A domain-neutral interpretation of the same concept:
—William of Ockham
This one really needs to have been applied to itself, "short is good" is way better.
(also this was one of EY's quotes in the original rationality quotes set, http://lesswrong.com/lw/mx/rationality_quotes_3/ )
Also, "short is good" would narrow this quotes focus considerably.
Perfection is lack of excess.
"If you choose to follow a religion where, for example, devout Catholics who are trying to be good people are all going to Hell but child molestors go to Heaven (as long as they were "saved" at some point), that's your choice, but it's fucked up. Maybe a God who operates by those rules does exist. If so, fuck Him." --- Bill Zellar's suicide note, in regards to his parents' religion
I love this passage. If a god as described in the Bible did exist, following him would be akin to following Voldemort: fidelity simply because he was powerful. This isn't precisely a rationality quote, but it does have a bit of the morality-independent-of-religion thing. (The rest of the note is beautiful and eloquent as well.)
I think we should keep some sort of separation between "rationality quotes" and "atheism quotes". You can stretch this to be a rationality quote, but it does require a stretch. Just because a quote argues against the existence of a god doesn't make it particularly rational.
There are other similarities too. e.g. Voldemort's human form died and rose again; his (first) death was foretold in prophesy, involved a betrayal (albeit in the opposite direction), and left his followers anxiously awaiting his return; "And these signs shall follow them that believe; ... they shall speak with new tongues; They shall take up serpents..." (Mark 16:17-18); ...
So, who wants to join the First Church of Voldemort?
-- Paul Feyerabend
This one could do with expansion and/or contextualisation. A quick Google only turns up several pages of just the bare quote (including on a National Institutes of Health .gov page!) - what was the original source? Anyone?
Well, I deliberately left out the source because I didn't think it would play well in this Peoria of thought -- it's from his book of essays Farewell to Reason. Link to gbooks with some context.
We've had rationality quotes before from C.S. Lewis, G.K. Chesterton, and Jack Chick among others. I don't think people are going to complain because of generic context issues even if Feyerabend did say some pretty silly stuff.
--William T. Vollmann
The 3 downvotes this had when I entered the thread seem rather harsh, considering it could be rephrased as "think like reality." The questionable part is that the universe has a moral order, but a charitable reading of the quote will not demand that it means "a moral order independent of human minds."
There's no such thing.
The moral order is within us.
And we are within the universe! So that all works out nicely.
—Oscar Wilde
Infinite Jest, page 159