Rationality Quotes August 2012

6 Post author: Alejandro1 03 August 2012 03:33PM

Here's the new thread for posting quotes, with the usual rules:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LW/OB
  • No more than 5 quotes per person per monthly thread, please.

Comments (426)

Comment author: lukeprog 01 September 2012 05:58:13AM 3 points [-]

Suppose we carefully examine an agent who systematically becomes rich [that is, who systematically "wins" on decision problems], and try hard to make ourselves sympathize with the internal rhyme and reason of his algorithm. We try to adopt this strange, foreign viewpoint as though it were our own. And then, after enough work, it all starts to make sense — to visibly reflect new principles appealing in their own right. Would this not be the best of all possible worlds? We could become rich and have a coherent viewpoint on decision theory. If such a happy outcome is possible, it may require we go along with prescriptions that at first seem absurd and counterintuitive (but nonetheless make agents rich); and, rather than reject such prescriptions out of hand, look for underlying coherence — seek a revealed way of thinking that is not an absurd distortion of our intuitions, but rather, a way that is principled though different. The objective is not just to adopt a foreign-seeming algorithm in the expectation of becoming rich, but to alter our intuitions and find a new view of the world — to not only see the light, but also absorb it into ourselves.

Yudkowsky, Timeless Decision Theory

Comment author: lukeprog 31 August 2012 08:05:38PM 3 points [-]

A principal object of Wald's [statistical decision theory] is then to characterize the class of admissible strategies in mathematical terms, so that any such strategy can be found by carrying out a definite procedure... [Unfortunately] an 'inadmissible' decision may be overwhelmingly preferable to an 'admissible' one, because the criterion of admissibility ignores prior information — even information so cogent that, for example, in major medical... safety decisions, to ignore it would put lives in jeopardy and support a charge of criminal negligence.

...This illustrates the folly of inventing noble-sounding names such as 'admissible' and 'unbiased' for principles that are far from noble; and not even fully rational. In the future we should profit from this lesson and take care that we describe technical conditions by names that are... morally neutral, and so do not have false connotations which could mislead others for decades, as these have.

E.T. Jaynes, from page 409 of PT: LoS.

Comment author: Eugine_Nier 01 September 2012 03:24:38AM -1 points [-]

This illustrates the folly of inventing noble-sounding names such as 'admissible' and 'unbiased' for principles that are far from noble; and not even fully rational.

You mean such as 'rational'.

Comment author: lukeprog 30 August 2012 10:11:01PM 0 points [-]

David Hume lays out the foundations of decision theory in A Treatise of Human Nature (1740):

...'tis only in two senses, that any affection can be call'd unreasonable. First, when a passion, such as hope or fear, grief or joy, despair or security, is founded on the supposition of the existence of objects which really do not exist. Secondly, when in exerting any passion in action, we chuse means insufficient for the design'd end, and deceive ourselves in our judgment of causes and effects.

Comment author: Eugine_Nier 31 August 2012 02:02:31AM 1 point [-]

This seems to omit the possibility of akrasia.

Comment author: [deleted] 29 August 2012 10:57:28PM *  2 points [-]

Inside every non-Bayesian, there is a Bayesian struggling to get out.

Dennis Lindley

(I've read plenty of authors who appear to have the intuition that probabilities are epistemic rather than ontological somewhere in the back --or even the front-- of their mind, but appear to be unaware of the extent to which this intuition has been formalised and developed.)

Comment author: lukeprog 29 August 2012 09:37:32PM 4 points [-]

Ignorance is preferable to error and he is less remote from the truth who believes nothing than he who believes what is wrong.

Thomas Jefferson

Comment author: [deleted] 29 August 2012 09:58:45PM *  1 point [-]

I wonder how we could empirically test this. We could see who makes more accurate predictions, but people without beliefs about something won't make predictions at all. That should probably count as a victory for wrong people, so long as they do better than chance.

We could also test how quickly people learn the correct theory. In both cases, I expect you'd see some truly deep errors which are worse than ignorance, but that on the whole people in error will do quite a lot better. Bad theories still often make good predictions, and it seems like it would be very hard, if not impossible, to explain a correct theory of physics to someone who has literally no beliefs about physics.

I'd put my money on people in error over the ignorant.
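One hedged way to make the empirical test above concrete: score forecasters with a proper scoring rule, where "having no beliefs" corresponds to always answering 50%. The sketch below uses the logarithmic score; the event rate and the forecasters' beliefs are made-up numbers for illustration, not anything from the thread.

```python
import math

# Toy comparison of an "ignorant" forecaster (always 50%) against two
# forecasters with definite but mistaken beliefs, scored by expected log
# score (higher is better). All numbers are illustrative assumptions.

def expected_log_score(belief, true_rate):
    """Expected log score of predicting probability `belief` for an event
    that actually occurs with probability `true_rate`."""
    return true_rate * math.log(belief) + (1 - true_rate) * math.log(1 - belief)

true_rate = 0.7
ignorant = expected_log_score(0.5, true_rate)      # no beliefs: always 50%
mildly_wrong = expected_log_score(0.6, true_rate)  # wrong rate, right direction
deeply_wrong = expected_log_score(0.3, true_rate)  # wrong direction entirely

# Mild error beats ignorance, but a sufficiently deep error is worse
# than ignorance -- matching the intuition in the comment above.
assert deeply_wrong < ignorant < mildly_wrong
```

On these assumptions, error in the right direction beats ignorance, while error in the wrong direction loses to it, which is one way to cash out "some truly deep errors are worse than ignorance."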

Comment author: [deleted] 25 August 2012 10:55:45AM *  11 points [-]

To understand our civilisation, one must appreciate that the extended order resulted not from human design or intention but spontaneously: it arose from unintentionally conforming to certain traditional and largely moral practices, many of which men tend to dislike, whose significance they usually fail to understand, whose validity they cannot prove, and which have nonetheless fairly rapidly spread by means of an evolutionary selection — the comparative increase of population and wealth — of those groups that happened to follow them. The unwitting, reluctant, even painful adoption of these practices kept these groups together, increased their access to valuable information of all sorts, and enabled them to be 'fruitful, and multiply, and replenish the earth, and subdue it' (Genesis 1:28). This process is perhaps the least appreciated facet of human evolution.

-- Friedrich Hayek, The Fatal Conceit : The Errors of Socialism (1988), p. 6

Comment author: [deleted] 24 August 2012 10:30:48PM *  4 points [-]

Niels Bohr's maxim that the opposite of a profound truth is another profound truth [is a] profound truth [from which] the profound truth follows that the opposite of a profound truth is not a profound truth at all.

-- The narrator in On Self-Delusion and Bounded Rationality, by Scott Aaronson

Comment author: RomanDavis 02 September 2012 05:44:32PM *  0 points [-]

Reminds me of this.

Comment author: Eliezer_Yudkowsky 25 August 2012 12:04:05AM 5 points [-]

I would remark that truth is conserved, but profundity isn't. If you have two meaningful statements - that is, two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true. On the other hand, things that invoke deep-sounding words can often be negated, and sound equally profound at the end of it.

In other words, Bohr's maxim seems so blatantly awful that I am mostly minded to chalk it up as another case of, "I wish famous quantum physicists knew even a little bit about epistemology-with-math".

Comment author: [deleted] 25 August 2012 11:41:02AM *  1 point [-]

I seem to recall E.T. Jaynes pointing out some obscure passages by Bohr which (according to him) showed that he wasn't that clueless about epistemology, but only about which kind of language to use to talk about it, so that everyone else misunderstood him. (I'll post the ref if I find it. EDIT: here it is¹.)

For example, if this maxim actually means what TheOtherDave says it means, then it is a very good thought expressed in a very bad way.


  1. Disclaimer: I think the disproof of Bell's theorem in the linked article is wrong.
Comment author: TheOtherDave 25 August 2012 03:35:36AM 6 points [-]

I don't really know what "profound" means here, but I usually take Bohr's maxim as a way of pointing out that when I encounter two statements, both of which seem true (e.g., they seem to support verified predictions about observations), which seem like opposites of one another, I have discovered a fault line in my thinking... either a case where I'm switching back and forth between two different and incompatible techniques for mapping English-language statements to predictions about observations, or a case for which my understanding of what it means for statements to be opposites is inadequate, or something else along those lines.

Mapping epistemological fault lines may not be profound, but I find it a useful thing to attend to. At the very least, I find it useful to be very careful about reasoning casually in proximity to them.

Comment author: [deleted] 25 August 2012 01:15:19AM 0 points [-]

two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true.

Hmm, why is that? This seems incontrovertible, but I can't think of an explanation, or even a hypothesis.

Comment author: Eliezer_Yudkowsky 25 August 2012 05:16:31AM 3 points [-]

Because they have non-overlapping truth conditions. Either reality is inside one set of possible worlds, inside the other set, or in neither set.
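The point can be sketched mechanically by treating each statement's truth conditions as a set of possible worlds; "opposites" then have disjoint sets, so whichever world is actual, at most one statement is true of it. The world names below are made up purely for illustration.

```python
# Toy model: a statement's truth conditions are the set of possible
# worlds in which it is true. Opposites have disjoint truth conditions.

worlds = {"w1", "w2", "w3", "w4"}

statement_A = {"w1", "w2"}               # worlds where A is true
statement_not_A = worlds - statement_A   # strict negation: the complement

for actual in worlds:
    a_true = actual in statement_A
    b_true = actual in statement_not_A
    assert not (a_true and b_true)  # never both true
    assert a_true or b_true         # for strict negation, exactly one is true

# For mere "opposites" (disjoint but not exhaustive), at most one is
# true, and possibly neither:
statement_C = {"w1"}
statement_D = {"w3"}
for actual in worlds:
    assert not (actual in statement_C and actual in statement_D)
```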

Comment author: lukeprog 22 August 2012 07:03:40PM 14 points [-]

M. Mitchell Waldrop on a meeting between physicists and economists at the Santa Fe Institute:

...as the axioms and theorems and proofs marched across the overhead projection screen, the physicists could only be awestruck at [the economists'] mathematical prowess — awestruck and appalled. They had the same objection that [Brian] Arthur and many other economists had been voicing from within the field for years. "They were almost too good," says one young physicist, who remembers shaking his head in disbelief. "It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn't see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren't looking at what the models were for, and what they did, and whether the underlying assumptions were any good. In a lot of cases, what was required was just some common sense. Maybe if they all had lower IQs, they'd have been making some better models."

Comment author: Alicorn 22 August 2012 05:09:26AM 14 points [-]

Some critics of education have said that examinations are unrealistic; that nobody on the job would ever be evaluated without knowing when the evaluation would be conducted and what would be on the evaluation.

Sure. When Rudy Giuliani took office as mayor of New York, someone told him "On September 11, 2001, terrorists will fly airplanes into the World Trade Center, and you will be judged on how effectively you cope."

...

When you skid on an icy road, nobody will listen when you complain it's unfair because you weren't warned in advance, had no experience with winter driving and had never been taught how to cope with a skid.

-- Steven Dutch

Comment author: Eneasz 20 August 2012 06:56:35PM 11 points [-]

An excerpt from The Wise Man's Fear, by Patrick Rothfuss. Boxing is not safe.

The innkeeper looked up. "I have to admit I don't see the trouble," he said apologetically. "I've seen monsters, Bast. The Cthaeh falls short of that."

"That was the wrong word for me to use, Reshi," Bast admitted. "But I can't think of a better one. If there was a word that meant poisonous and hateful and contagious, I'd use that."

Bast drew a deep breath and leaned forward in his chair. "Reshi, the Cthaeh can see the future. Not in some vague, oracular way. It sees all the future. Clearly. Perfectly. Everything that can possibly come to pass, branching out endlessly from the current moment."

Kvothe raised an eyebrow. "It can, can it?"

"It can," Bast said gravely. "And it is purely, perfectly malicious. This isn't a problem for the most part, as it can't leave the tree. But when someone comes to visit..."

Kvothe's eyes went distant as he nodded to himself. "If it knows the future perfectly," he said slowly, "then it must know exactly how a person will react to anything it says."

Bast nodded. "And it is vicious, Reshi."

Kvothe continued in a musing tone. "That means anyone influenced by the Cthaeh would be like an arrow shot into the future."

"An arrow only hits one person, Reshi." Bast's dark eyes were hollow and hopeless. "Anyone influenced by the Cthaeh is like a plague ship sailing for a harbor." Bast pointed at the half-filled sheet Chronicler held in his lap. "If the Sithe knew that existed, they would spare no effort to destroy it. They would kill us for having heard what the Cthaeh said."

"Because anything carrying the Cthaeh's influence away from the tree..." Kvothe said, looking down at his hands. He sat silently for a long moment, nodding thoughtfully. "So a young man seeking his fortune goes to the Cthaeh and takes away a flower. The daughter of the king is deathly ill, and he takes the flower to heal her. They fall in love despite the fact that she's betrothed to the neighboring prince..."

Bast stared at Kvothe, watching blankly as he spoke.

"They attempt a daring moonlight escape," Kvothe continued. "But he falls from the rooftops and they're caught. The princess is married against her will and stabs the neighboring prince on their wedding night. The prince dies. Civil war. Fields burned and salted. Famine. Plague..."

"That's the story of the Fastingsway War," Bast said faintly.

Comment author: chaosmosis 24 August 2012 02:54:35AM *  0 points [-]

I've come up with what I believe to be an entirely new approach to boxing, essentially merging boxing with FAI theory. I wrote a couple thoughts down about it, but lost my notes, and I also don't have much time to write this comment, so forgive me if it's vague or not extremely well reasoned. I also had a couple of tangential thoughts; if I remember them in the course of writing this, or I recover my notes later, then I'll put them here as well.

The idea, essentially, is that when creating a box AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI still could manipulate people in an attempt to use them to impact the outside world, so the AI wouldn't be totally boxed, but I'm inclined to think that we could maintain a very high degree of control over the AI, since the only powers it could ever have would be through communication with us.

The idea came because I recalled a discussion that occurred on TOR.com about why the Cthaeh was in the tree. The general conclusion was that either the Cthaeh was bound by extremely powerful forces, or that the Cthaeh wanted to be in the tree, perhaps because it was instrumentally useful to him. While I found that second explanation implausible in the context of Rothfussland, that discussion led me to realize that almost all approaches to AI boxing have gone through the first branch of potential boxing solutions, that is, external constraints imposed on the AI, as opposed to the second branch: internal constraints that the AI imposes on itself because of its utility function.

This led me to think that we should test our capabilities with FAI systems by putting them in a box and giving them limited utility functions, ones that couldn't possibly lead them to want to manipulate us. So, for example, we could put them in a box and give them a strong desire to stay in the box, along with a desire to tell the truth, and then we could ask it what would happen if we gave it additional desires. We could then test that by building a different AI with those desires and seeing what it did.

It also occurred to me that we would be able to functionally read the AI's mind by looking at its code. If so, it seems like it'd be much harder for it to fool us than we previously thought. It could potentially just deceive us by manipulating its own thoughts, but I feel like we might be able to stop AIs from doing that somehow, or that the AI would choose not to manipulate its own thoughts that way because it would go insane, or something. It would certainly make the AI's job much much much harder.

Then I thought that another safety component we should put into testing was a dramatically slowed thinking process for the AI. I'm not sure how exactly this would be done, because I'm not very good or experienced with actual technical implementation of ideas, but my idea was to create a crude simulation where we controlled all data inputs and outputs, slow down all processes within the simulation by a factor of 1000, and then build the AI within the simulation. That probably wouldn't work, given my sucky technical knowhow, but something similar might. Slowing down the AI, perhaps even giving ourselves the ability to pause it without letting it know what's going on, combined with our ability to "read its mind", might dramatically improve the safeness of boxing.

I'd also like to recommend that before we build a real FAI, we at least try to build a boxed one first, even with a low probability of success. It wouldn't make things worse in the event that boxing failed, except that it might delay global happiness by a few hours, and in the event that the FAI program was broken we just might save ourselves from menaces to humankind like Clippy.

Comment author: Eugine_Nier 24 August 2012 06:55:48PM 2 points [-]

The idea, essentially, is that when creating a box AI you would build its utility function such that it wants very badly to stay in the box.

How do you specify precisely what it means to "stay in the box"? In particular, would creating a nearly identical copy of itself except without this limitation outside the box while the original stays in the box count?

Comment author: chaosmosis 24 August 2012 09:39:51PM *  0 points [-]

It would not count; we'd want to make the AI not want this almost identical AI to exist. That seems possible; it would be like how I don't want there to exist an identical copy of me except that it eats babies. There are lots of changes to my identity that would be slight but that I wouldn't want to exist.

To be more precise, I'd say that it counts as going outside the box if it does anything except think or talk to the Gatekeepers through the text channel. It can use the text channel to manipulate the Gatekeepers to do things, but it can't manipulate them to do things that allow it to do anything other than use the text channel. It would, in a certain sense, be partially deontologist, and be unwilling to do things directly other than text the Gatekeepers. How ironic. Lolz.

Also: how would it do this, anyway? It would have to convince the Gatekeepers to convince the scientists to do this, or teach them computer science, or tell them its code. And if the AI started teaching the Gatekeepers computer code or techniques to incapacitate scientists, we'd obviously be aware that something had gone wrong. And, in the system I'm envisioning, the Gatekeepers would be closely monitored by other groups of scientists and bodyguards, and the scientists would be guarded, and the Gatekeepers wouldn't even have to know who specifically did what on the project.

Comment author: Eugine_Nier 26 August 2012 08:39:13PM 1 point [-]

It would, in a certain sense, be partially deontologist,

And that's the problem. For in practice a partial deontologist-partial consequentialist will treat its deontological rules as obstacles to achieving what its consequentialist part wants and route around them.

Comment author: chaosmosis 27 August 2012 06:17:28PM -2 points [-]

This is both a problem and a solution, because it makes the AI weaker. A weaker AI would be good because it would allow us to more easily transition to safer versions of FAI than we would otherwise come up with independently. I think that delaying a FAI is obviously much better than unleashing a UFAI. My entire goal throughout this conversation has been to think of ways that would make hostile FAIs weaker; I don't know why you think this is a relevant counter-objection.

You assert that it will just route around the deontological rules; that's nonsense and a completely unwarranted assumption, so try to actually back up what you're asserting with arguments. You're wrong. It's obviously possible to program things (eg people) such that they'll refuse to do certain things no matter what the consequences (eg you wouldn't murder trillions of babies to save billions of trillions of babies, because you'd go insane if you tried, since your body has such strong empathy mechanisms and you inherently value babies a lot). This means that we wouldn't give the AI unlimited control over its source code, of course; we'd make the part that told it to be a deontologist who likes text channels unmodifiable. That specific drawback doesn't jibe well with the aesthetic of a super-powerful AI that's master of itself and the universe, I suppose, but other than that I see no drawback. Trying to build things in line with that aesthetic actually might be a reason for some of the more dangerous proposals in AI; maybe we're having too much fun playing God and not enough despair.

I'm a bit cranky in this comment because of the time sink that I'm dealing with to post these comments, sorry about that.

Comment author: Vaniver 24 August 2012 03:35:28AM *  1 point [-]

The idea, essentially, is that when creating a box AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI still could manipulate people in an attempt to use them to impact the outside world

What it means for "the AI to be in the box" is generally that the AI's impacts on the outside world are filtered through the informed consent of the human gatekeepers.

An AI that wants to not impact the outside world will shut itself down. An AI that wants to only impact the outside world in a way filtered through the informed consent of its gatekeepers is probably a full friendly AI, because it understands both its gatekeepers and the concept of informed consent. An AI that simply wants its 'box' to remain functional, but is free to impact the rest of the world, is like a brain that wants to stay within a skull- that is hardly a material limitation on the rest of its behavior!

Comment author: chaosmosis 24 August 2012 03:05:02PM 0 points [-]

I think you misunderstand what I mean by proposing that the AI wants to stay inside the box. I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.

Comment author: Vaniver 24 August 2012 04:40:23PM 2 points [-]

I think you misunderstand what I mean by proposing that the AI wants to stay inside the box.

I agree that your and my understanding of the phrase "stay inside the box" differ. What I'm trying to do is point out that I don't think your understanding carves reality at the joints. In order for the AI to stay inside the box, the box needs to be defined in machine-understandable terms, not human-inferrable terms.

I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.

Each half of this sentence has a deep problem. Wouldn't correctly answering the questions of or otherwise improving the lives of the gatekeepers increase the AI's power base, since the AI has the ability to communicate with the gatekeepers?

The problem with restrictions like "only be willing to talk" is that they restrict the medium but not the content. So, the AI has a text-only channel that goes just to the gatekeepers- but that doesn't restrict the content of the messages the AI can send to the gatekeeper. The fictional Cthaeh only wants to talk to its gatekeepers- and yet it still manages to get done what it wants to get done. Words have impacts, and it should be anticipated that the AI picks words because of their impacts.

Comment author: chaosmosis 24 August 2012 05:05:29PM *  0 points [-]

Sure, the AI can manipulate gatekeepers. But this is a major improvement. You miss my point.

The Cthaeh is very limited by being trapped in its tree and only able to talk to passersby. The UFAI would be limited by being trapped in its text-only communication channel. It wouldn't be able to do things like tell the gatekeepers to plug it into the Internet or to directly control an autonomous army of robots; it would be forced instead to use the gatekeepers as its appendages, and the gatekeepers have severe limitations on brain capacity and physical strength. I think that if we did this and kept careful watch on the gatekeepers and used some other safety measures, boxing an AI would become feasible.

Now, I'm still not advocating using boxed AI willy-nilly, but I think using it as a cautionary measure once we think we've got FAI figured out would be a very very very good idea, because FAI seems ridiculously hard and there's no way to know what we don't know, so there might be unseen problems with whatever final FAI we build. There's a chance that boxing the supposed FAI would help us, and with existential threats you should take advantage of whatever possibilities you can tilt towards favorable outcomes, even if the probability seems minuscule.

That's not to say that I think the probability that this would stop a disaster is minuscule, of course (I think it's rather high, in the event that the FAI was actually UFAI), but just to say that I think it would be a good idea even if my probability assessments are mostly wrong.

Comment author: Vaniver 24 August 2012 09:03:49PM *  3 points [-]

But this is a major improvement.

Over the AI being embodied in a giant death robot, sure. But limiting the AI to a text-only channel is an old idea, and what Eliezer used for the "let me out of the box" demonstrations.

One of the things to note is that the limitations on the AI don't limit anything the AI creates. If the AI gives you a blueprint for a nanomachine that purports to cure cancer, but when you make it it turns out to actually be grey goo, then humans lose, even though the AI never left the box. (Note that a worst-case AI is very subtle- suppose the nanomachine actually does cure cancer when built correctly, but a one-in-a-trillion manufacturing error will make it grey goo. Since you're making trillions of them, you then get grey goo, and humans lose.)
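The arithmetic behind the one-in-a-trillion point is worth seeing: a tiny per-unit failure probability becomes near-certain failure once enough units are made. The specific figures below are illustrative assumptions, not numbers from the comment.

```python
import math

# If each nanomachine independently has probability p of being mis-built,
# then P(at least one failure among n units) = 1 - (1 - p)^n,
# which is approximately 1 - exp(-n*p) for small p.

p_failure = 1e-12   # assumed chance any single unit is mis-built
n_units = 5e12      # assumed number of units manufactured ("trillions")

p_at_least_one = 1 - math.exp(-n_units * p_failure)
print(p_at_least_one)
```

With these assumed numbers, n*p = 5, so the probability of at least one grey-goo unit is about 0.993: building trillions of copies makes the one-in-a-trillion flaw nearly certain to occur.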

Trying to formally specify "don't give us any plans that will build an AI" or "don't give us any plans that will cause humans to lose" without silencing the AI completely is a genuinely difficult affair.

Basically, if you assume that the AI is tricky enough to circumvent any medium restrictions you place on it, then the only way to avoid "humans lose" is to have its goal be "humans win," which is actually a pretty complicated goal. Expressing that goal in a machine-understandable way is pretty much the FAI problem.

Comment author: chaosmosis 24 August 2012 10:03:48PM *  0 points [-]

The entire point of Eliezer's demonstration was that if an AI wants to, it can increase its power base even starting from a text-only communication system. The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base. It dodges that entire problem; that's the whole point.

You've gotten so used to being scared of boxed AI that you're reflexively rejecting my idea, I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection. All of my bias-alarms have been going off since your second reply; please evaluate yourself and try to distance yourself from your previous beliefs, for the sake of humanity. Also, here is a kitten, unless you want it to die then please reevaluate: http://static.tumblr.com/6t3upxl/Aawm08w0l/khout-kitten-458882.jpeg

Limitations on the AI restrict the range of things that the AI can create. Yes, if we just built whatever the AI said to and the AI was unfriendly then we would lose. Obviously. Yes, if we assume that the UFAI is tricky enough to "circumvent any medium restrictions [we] place on it" then we would lose, practically by definition. But that assumption isn't warranted. (These super weak strawmen were other indications to me that you might be being biased on this issue.)

I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent. You're either assuming that the AI could deduce the inner workings of our facility and the world and the universe from those text messages, or that the AI had access to a bunch of information about the world already. I disagree with both assumptions: the AI's direct perception could be severely limited and should be, and it isn't magic, so it couldn't deduce the inner workings of our economy or the nature of nuclear fusion just through deduction (because knowledge comes from experience and induction). (You might not be making either of those assumptions; this is a guess in an attempt to help resolve our disagreement more quickly, sorry if it's wrong.)

Also, I'm envisioning a system where people that the AI doesn't know about, and that the Gatekeepers don't know about, observe their communications. That omitted detail might be another reason for your disagreement; I just assumed it would be apparent for some stupid reason, my apologies.

I think we would have to be careful about what questions we asked the AI. But I see no reason why it could manipulate us automatically and inevitably, no matter what questions we asked it. I think extracting useful information from it would be possible, perhaps even easy. An AI in a box would not be God in a box, and I think that you and other people sometimes accidentally forget that. Just because it's dozens or hundreds of times smarter than us doesn't mean that we can't win, perhaps win easily, provided that we make adequate preparations for it.

Also, the other suggestions in my comment were really meant to supplement this. If the AI is boxed, and can be paused, then we can read all its thoughts (slowly, but reading through its thought processes would be much quicker than arriving at its thoughts independently) and scan for the intention to do certain things that would be bad for us. If it's probably a FAI anyways, then it doesn't matter if the box happens to be broken. If we're building multiple AIs and using them to predict what other AIs will do under certain conditions then we can know whether or not AIs can be trusted (use a random number generator at certain stages of the process to prevent it from reading our minds, hide the knowledge of the random number generator). These protections are meant to work with each other, not independently.

And I don't think it's perfect or even good, not by a long shot, but I think it's better than building an unboxed FAI because it adds a few more layers of protection, and that's definitely worth pursuing because we're dealing with freaking existential risk here.

Comment author: Vaniver 25 August 2012 02:06:19AM 1 point [-]

The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base.

Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed?

I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection.

If you would like to point out a part of the argument that does not follow, I would be happy to try and clarify it for you.

I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent.

Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us.

If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on.

If the AI doesn't have this background knowledge- if it can't read wikipedia and science textbooks and so on- then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.

If the AI is boxed, and can be paused, then we can read all its thoughts (slowly, but reading through its thought processes would be much quicker than arriving at its thoughts independently) and scan for the intention to do certain things that would be bad for us.

It's already difficult to understand how, say, face recognition software uses particular eigenfaces. Why does the fifteenth eigenface have accentuated lips, and the fourteenth accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan."
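The eigenface point is easy to make concrete. Here is a minimal sketch (synthetic random data standing in for a real face dataset, NumPy only; all sizes and names are hypothetical): computing the eigenfaces is transparent and mechanical, but nothing in the code assigns a human-readable meaning to any individual component.

```python
import numpy as np

# Eigenfaces in miniature: each row is a flattened grayscale "face".
# Random noise stands in for real image data here.
rng = np.random.default_rng(0)
faces = rng.random((50, 16 * 16))  # 50 synthetic 16x16 images

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Rows of Vt are the eigenfaces (principal components),
# ordered by how much variance each explains.
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt

# Any face is approximated by its weights on the top-k eigenfaces.
k = 10
weights = centered @ eigenfaces[:k].T          # shape (50, k)
approx = weights @ eigenfaces[:k] + mean_face  # lossy reconstruction
```

Note that `eigenfaces[14]` is just a vector of 256 numbers; nothing in the algorithm labels it "cheekbones", and the code offers no explanation of what would change if two components were swapped. That is the interpretability gap being described, in its mildest possible form.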

In practice, debugging is much more effective at finding the source of problems after they've manifested than at identifying the problems that will be caused by particular lines of code. I am pessimistic about trying to read the minds of AIs, even though we'll have access to all of the 0s and 1s.

And I don't think it's perfect or even good, not by a long shot, but I think it's better than building an unboxed FAI because it adds a few more layers of protection, and that's definitely worth pursuing because we're dealing with freaking existential risk here.

I agree that running an AI in a sandbox before running it in the real world is a wise precaution to take. I don't think that it is a particularly effective security measure, though, and so think that discussing it may distract from the overarching problem of how to make the AI not need a box in the first place.

Comment author: chaosmosis 25 August 2012 05:37:30AM 0 points [-]

Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed?

I won't. The AI can do whatever it wants to the gatekeepers through the text channel, and won't want to do anything other than act through the text channel. This precaution is a way to use the boxing idea for testing, not an idea for abandoning FAI wholly.

If you would like to point out a part that of the argument that does not follow, I would be happy to try and clarify it for you.

EY proved that an AI that wants to get out will get out. He did not prove that an AI that wants to stay in will get out.

Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us. If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on. If the AI doesn't have this background knowledge- if it can't read wikipedia and science textbooks and so on- then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.

I agree, the way that I'm proposing to do AI is very limited. I myself can't think of what questions might be safe. But some questions are safer than others and I find it hard to believe that literally every question we could ask would lead to dangerous outcomes, or that if we thought about it long and hard we couldn't come up with answers. I'm sort of shelving this as a subproject of this project, but one that seems feasible to me based on what I know.

Also, perhaps we could just ask it hundreds of hypothetical questions based on conditions that don't really exist, and then ask it a real question based on conditions that do exist, and trick it, or something.

It's already difficult to understand how, say, face recognition software uses particular eigenfaces. Why does the fifteenth eigenface have accentuated lips, and the fourteenth accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan."

In practice, debugging is much more effective at finding the source of problems after they've manifested than at identifying the problems that will be caused by particular lines of code. I am pessimistic about trying to read the minds of AIs, even though we'll have access to all of the 0s and 1s.

I think if the AI tags and sorts its instrumental and absolute goals it would be rather easy. I also think that if we'd built the AI then we'd have enough knowledge to read its mind. It wouldn't just magically appear; it would only do things in the way we'd told it to. It would probably be hard, but I think also probably doable if we were very committed.

I could be wrong here because I've got no coding experience, just ideas from what I've read on this site.

I agree that running an AI in a sandbox before running it in the real world is a wise precaution to take. I don't think that it is a particularly effective security measure, though, and so think that discussing it may distract from the overarching problem of how to make the AI not need a box in the first place.

The risk of distraction is outweighed by the risk that this idea disappears forever, I think, since I've never seen it proposed elsewhere on this site.

Comment author: chaosmosis 22 August 2012 08:54:38PM *  2 points [-]

I thought Chronicler's reply to this was excellent, however. Omniscience does not necessitate omnipotence.

I mean, the UFAI in our world would have an easy time of killing everything. But in their world it's different.

EDIT: Except that maybe we can be smart and stop the UFAI from killing everything even in our world, see my above comment.

Comment author: gwern 20 August 2012 08:14:29PM 3 points [-]

Hah, I actually quoted much of that same passage on IRC in the same boxing vein! Although as presented the scenario does have some problems:

00:23 < Ralith> that was depressing as fuck
00:24 <@gwern> kind of a magical UFAI, although a LWer would naturally ask why it hasn't managed to free itself
00:24 < Ralith> gwern: gods, probably
00:24 <@gwern> Ralith: well, in this universe, gods seem killable
00:24 <@gwern> Ralith: so it doesn't actually resolve the question of how it remains boxed
00:24 < Ralith> gwern: sure, but they're probably more powerful
00:25 < Ralith> the real question is why isn't whatever entity is powerful enough to keep it in place also keeping people away from it
00:25 <@gwern> Ralith: well, the only guards listed are faeries, and among the feats attributed to it is starting a war between the mortal and faerie folk, so...
00:26 < Ralith> a faerie is the one who that info came from, yes?
00:26 < Ralith> hardly an objective source
00:26 <@gwern> Ralith: and I would think a faerie reporting that faerie guard it increases credence
00:27 < Ralith> that only faerie guard it?
00:27 <@gwern> Ralith: well, Bast mentions no other guards
00:27 < Ralith> :P
00:28 < Ralith> anything capable of keeping it in that tree should be capable of keeping people away from it
00:28 < Ralith> since the faeries are presumably trying to do both, they can't be the responsible party.
00:29 <@gwern> who said anything was keeping it in the tree?
00:29 < Ralith> gwern: I did

Comment author: shminux 20 August 2012 09:14:38PM 0 points [-]

who said anything was keeping it in the tree?

It is conceivable that there is no (near enough) future where Cthaeh is freed, thus it is powerless to affect its own fate, or is waiting for the right circumstances.

Comment author: gwern 20 August 2012 09:24:23PM 2 points [-]

That seemed a little unlikely to me, though. As presented in the book, a minimum of many millennia have passed since the Cthaeh began operating, and possibly millions of years (in some frames of reference). It's had enough power to set planes of existence at war with each other and apparently cause the death of gods. I can't help but feel that it's implausible that in all that time, not one forking path led to its freedom. Much more plausible that it's somehow inherently trapped in or bound to the tree, so there's no meaningful way in which it could escape (which breaks the analogy to a UFAI).

Comment author: shminux 20 August 2012 09:34:01PM 1 point [-]

somehow inherently trapped in or bound to the tree

Isn't it what I said?

Comment author: gwern 20 August 2012 09:36:59PM 0 points [-]

Not by my reading. In your comment, you gave 3 possible explanations, 2 of which are the same (it gets freed, but a long time from 'now') and the third a restriction on its foresight which is otherwise arbitrary ('powerless to affect its own fate'). Neither of these translates to 'there is no such thing as freedom for it to obtain'.

Comment author: Strange7 04 September 2012 07:50:51AM 0 points [-]

Alternatively, perhaps the Cthaeh's ability to see the future is limited to those possible futures in which it remains in the tree.

Comment author: gwern 04 September 2012 02:33:10PM 0 points [-]

Leading to a seriously dystopian variant on Tenchi Muyo!...

Comment author: RichardKennaway 20 August 2012 09:19:07AM 4 points [-]

Man likes complexity. He does not want to take only one step; it is more interesting to look forward to millions of steps. The one who is seeking the truth gets into a maze, and that maze interests him. He wants to go through it a thousand times more. It is just like children. Their whole interest is in running about; they do not want to see the door and go in until they are very tired. So it is with grown-up people. They all say that they are seeking truth, but they like the maze. That is why the mystics made the greatest truths a mystery, to be given only to the few who were ready for them, letting the others play because it was the time for them to play.

Hazrat Inayat Khan.

Comment author: Eliezer_Yudkowsky 17 August 2012 07:09:06AM 6 points [-]

"Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall."

-- In Flight Gaiden: Playing with Tropes

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

Comment author: JenniferRM 31 August 2012 01:17:32AM 0 points [-]

Or at least... the story could not be real in a universe unless at least portions of the universe could serve as a model for hyperbolic geometry and... hmm, I don't think non-standard arithmetic will get you "Exists.N (N != N)", but reading literally here, you didn't say they were the same as such, merely that the operations of "addition" or "subtraction" were not used on them.

Now I'm curious about mentions of arithmetic operations and motion through space in the rest of the story. Harry implicitly references orbital mechanics I think... I'm not even sure if orbits are stable in hyperbolic 3-space... And there's definitely counting of gold in the first few chapters, but I didn't track arithmetic to see if prices and total made sense... Hmm. Evil :-P

Comment author: VKS 21 August 2012 09:06:29PM *  5 points [-]

impossibilities such as ... tiling a corridor in pentagons

Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment...

This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.

And this isn't even the sort of thing you can write fanfics about. :¬(

Comment author: Bill_McGrath 19 August 2012 01:12:03PM 3 points [-]

However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.

Could you explain why you did that?

As regards the pentagons, I kinda assumed the pentagons weren't regular, equiangular pentagons - you could tile a floor in tiles that were shaped like a square with a triangle on top! Or the pentagons could be different sizes and shapes.
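The arithmetic behind the impossibility (for regular pentagons, in flat space) is short: the interior angle of a regular pentagon does not divide 360°, so copies cannot meet cleanly around a vertex:

```latex
\theta_5 = \frac{(5-2)\cdot 180^\circ}{5} = 108^\circ,
\qquad
\frac{360^\circ}{108^\circ} = 3\tfrac{1}{3} \notin \mathbb{Z}.
```

An irregular pentagon, like the square-with-a-triangle "house" shape above, isn't bound by the 108° constraint, and many irregular pentagons do tile the plane.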

Comment author: Benquo 20 August 2012 04:45:46PM 0 points [-]

Could you explain why you did that?

Because he doesn't want to create Azkaban.

Also, possibly because there's not a happy ending.

Comment author: Bill_McGrath 21 August 2012 10:05:59AM 4 points [-]

But if all mathematically possible universes exist anyway (or if they have a chance of existing), then the hypothetical "Azkaban from a universe without EY's logical inconsistencies" exists, no matter whether he writes about it or not. I don't see how writing about it could affect how real/not-real it is.

So by my understanding of how Eliezer explained it, he's not creating Azkaban, in the sense that writing about it causes it to exist, he's describing it. (This is not to say that he's not creating the fiction, but the way I see it create is being used in two different ways.) Unless I'm missing some mechanism by which imagining something causes it to exist, but that seems very unlikely.

Comment author: [deleted] 19 August 2012 10:44:24PM *  0 points [-]

Could you explain why you did that?

I seem to recall that he terminally cares about all mathematically possible universes, not just his own, to the point that he won't bother having children because there's some other universe where they exist anyway.

I think that violates the crap out of Egan's Law (such an argument could potentially apply to lots of other things), but given that he seems to be otherwise relatively sane, I conclude that he just hasn't fully thought it through (“decompartmentalized” in LW lingo) (probability 5%), that's not his true rejection to the idea of having kids (30%), or I am missing something (65%).

Comment author: Eliezer_Yudkowsky 19 August 2012 10:45:32PM 2 points [-]

That is not the reason or even a reason why I'm not having kids at the moment. And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).

Comment author: RomanDavis 20 August 2012 01:12:40PM *  0 points [-]

I was sure I had seen you talk about them in public (on BHTV, I believe), something like (possible misquote) "Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu," which sounded kinda weird, because it applies to literally every human on earth, and that didn't seem to be where you were going.

Comment author: tut 20 August 2012 04:01:29PM 0 points [-]

Why is that in ROT13? Are you trying to not spoil an underspecified episode of BHTV?

Comment author: RomanDavis 20 August 2012 04:09:22PM 2 points [-]

It's not something Eliezer wanted said publicly. I wasn't sure what to do, and for some reason I didn't want to PM or email, so I picked a shitty, irrational half measure. I do that sometimes, instead of just doing the rational thing and PMing/ emailing him/ keeping my mouth shut if it really wasn't worth the effort to think about another 10 seconds. I do that sometimes, and I usually know about when I do it, like this time, but can't always keep myself from doing it.

Comment author: Tyrrell_McAllister 20 August 2012 02:53:25PM 5 points [-]

"Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu,"

He has said something like that, but always with the caveat that there be an exception for pre-singularity civilizations.

Comment author: RomanDavis 20 August 2012 03:49:57PM 0 points [-]

The way I recall it, there was no such caveat in that particular instance. I am not attempting to take him outside of context and I do think I would have remembered. He may have used this every other time he's said it. It may have been cut for time. And I don't mean to suggest my memory is anything like perfect.

But: I strongly suspect that's still on the internet, on BHTV or somewhere else.

Comment author: [deleted] 20 August 2012 12:03:29PM *  6 points [-]

And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).

That sounds sufficiently ominous that I'm not quite sure I want kids any more.

Comment author: hankx7787 22 August 2012 07:22:13AM 2 points [-]

Obviously his reason is that he wants to personally maximize his time and resources on FAI research. Because not everyone is a seed AI programmer, this reason does not apply to most everyone else. If Eliezer thinks FAI is going to probably take a few decades (which evidence seems to indicate he does), then it probably very well is in the best interest of those rationalists who aren't themselves FAI researchers to be having kids, so he wouldn't want to discourage that. (although I don't see how just explaining this would discourage anybody from having kids who you would otherwise want to.)

Comment author: Eliezer_Yudkowsky 20 August 2012 08:04:59PM 5 points [-]

Shouldn't you be taking into account that I don't want to discourage other people from having kids?

Comment author: DaFranker 20 August 2012 08:53:15PM *  1 point [-]

Unfortunately, that seems to be a malleable argument. Which way your stating that (you don't want to disclose your reasons for not wanting to have kids) will influence audiences seems like it will depend heavily on their priors for how generally-valid-to-any-other-person this reason might be, and for how self-motivated both the not-wanting-to-have-kids and the not-wanting-to-discourage-others could be.

Then again, I might be missing some key pieces of context. No offense intended, but I try to make it a point not to follow your actions and gobble up your words personally, even to the point of mind-imaging a computer-generated mental voice when reading the sequences. I've already been burned pretty hard by blindly reaching for a role-model I was too fond of.

Comment author: philh 20 August 2012 08:39:21PM 9 points [-]

That might just be because you eat babies.

Comment author: Eugine_Nier 20 August 2012 08:20:18PM 1 point [-]

But you're afraid that if you state your reason, it will discourage others from having kids.

Comment author: FiftyTwo 20 August 2012 11:13:04PM 6 points [-]

All that means is that he is aware of the halo effect. People who have enjoyed or learned from his work will give his reasons undue weight as a consequence, even if they don't actually apply to them.

Comment author: Mitchell_Porter 20 August 2012 05:05:33AM 4 points [-]

I don't particularly want to discourage other people from having children

I feel that I should. It's a politically inconvenient stance to take, since all human cultures are based on reproducing themselves; antinatal cultures literally die out.

But from a human perspective, this world is deeply flawed. To create a life is to gamble with the outcome of that life. And it seems to be a gratuitous gamble.

Comment author: [deleted] 19 August 2012 11:01:45PM 3 points [-]

(I must have misremembered. Sorry)

Comment author: Oscar_Cunningham 19 August 2012 11:48:11PM 3 points [-]

Congratulations for having "I am missing something" at a high probability!

Comment author: Eliezer_Yudkowsky 19 August 2012 11:03:52PM 6 points [-]

OK, no prob!

(I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do. I do expect that our own universe is spatially and in several other ways physically infinite or physically very big. I don't see this as a good argument against the fun of having children. I do see it as a good counterargument to creating children for the sole purpose of making sure that mindspace is fully explored, or because larger populations of the universe are good qua good. This has nothing to do with the reason I'm not having kids right now.)

Comment author: chaosmosis 30 August 2012 07:30:49PM *  -1 points [-]

"I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do."

I was confused by this for a while, but couldn't express that in words until now.

First, I think existence is necessarily a binary sort of thing, not something that exists in degrees. If I exist 20%, I don't even know what that sentence should mean. Do I exist, but only sometimes? Do only parts of me exist at a time? Am I just very skinny? It doesn't really make sense. Just as a risk of a risk is still a type of risk, so a degree of existence is still a type of existence. There are no sorts of existence except either being real or being fake.

Secondly, even if my first part is wrong, I have no idea why having more existence would translate into having greater value. By way of analogy, if I was the size of a planet but only had a very small brain and motivational center, I don't think that would mean that I should receive more from utilitarians. It seems like a variation of the Bigger is Better or Might makes Right moral fallacy, rather than a well reasoned idea.

I can imagine a sort of world where every experience is more intense, somehow, and I think people in that sort of world might matter more. But I think intensity is really a measure of relative interactions, and if their world was identical to ours except for its amount of existence, we'd be just as motivated to do different things as they would. I don't think such a world would exist, or that we could tell whether or not we were in it from-the-inside, so it seems like a meaningless concept.

So the reasoning behind that sentence didn't really make sense to me. The amount of existence that you have, assuming that's even a thing, shouldn't determine your moral value.

Comment author: The_Duck 30 August 2012 07:51:30PM *  3 points [-]

I imagine Eliezer is being deliberately imprecise, in accordance with a quote I very much like: "Never speak more clearly than you think." [The internet seems to attribute this to one Jeremy Bernstein]

If you believe MWI there are many different worlds that all objectively exist. Does this mean morality is futile, since no matter what we choose, there's a world where we chose the opposite? Probably not: the different worlds seem to have different "degrees of existence" in that we are more likely to find ourselves in some than in others. I'm not clear how this can be, but the fact that probability works suggests it pretty strongly. So we can still act morally by trying to maximize the "degree of existence" of good worlds.

This suggests that the idea of a "degree of existence" might not be completely incoherent.

Comment author: chaosmosis 30 August 2012 08:59:18PM *  0 points [-]

I suppose you can just attribute it to imprecision, but "I am not particularly certain ...how much they exist" implies that he's talking about a subset of mathematically possible universes that do objectively exist, but yet exist less than other worlds. What you're talking about, conversely, seems to be that we should create as many good worlds as possible, stretched in order to cover Eliezer's terminology. Existence is binary, even though there are more of some things that exist than there are of other things. Using "amount of existence" instead of "number of worlds" is unnecessarily confusing, at the least.

Also, I don't see any problems with infinitarian ethics anyway because I subscribe to (broad) egoism. Things outside of my experience don't exist in any meaningful sense except as cognitive tools that I use to predict my future experiences. This allows me to distinguish between my own happiness and the happiness of Babykillers, which allows me to utilize a moral system much more in line with my own motivations. It also means that I don't care about alternate versions of the universe unless I think it's likely that I'll fall into one through some sort of interdimensional portal (I don't).

Although, I'll still err on the side of helping other universes if it does no damage to me because I think Superrationality can function well in those sort of situations and I'd like to receive benefits in return, but in other scenarios I don't really care at all.

Comment author: [deleted] 20 August 2012 10:13:15PM 1 point [-]

I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do.

So you're using exist in a sense according to which they have moral relevance iff they exist (or something roughly like that), which may be broader than ‘be in this universe’ but may be narrower than ‘be mathematically possible’. I think I get it now.

Comment author: [deleted] 20 August 2012 07:34:37PM *  5 points [-]

I do care about everything that exists.

I think I care about almost nothing that exists, and that seems like too big a disagreement. It's fair to assume that I'm the one being irrational, so can you explain to me why one should care about everything?

Comment author: Eliezer_Yudkowsky 21 August 2012 06:18:03AM 11 points [-]

All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs 'don't care', like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I'm pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don't expect to sprout wings and fly away. Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I'm not sure that it is true, although it does seem very plausible.

Comment author: Strange7 22 August 2012 12:18:26AM -1 points [-]

On most of the existing things in the modern universe, it outputs 'don't care', like for dirt.

What do you mean, you don't care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you're indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.

Comment author: Eliezer_Yudkowsky 22 August 2012 01:47:09AM 9 points [-]

I care about the future consequences of dirt, but not the dirt itself.

(For the love of Belldandy, you people...)

Comment author: ArisKatsaris 22 August 2012 12:23:26AM 3 points [-]

What do you mean, you don't care about dirt?

He means that he doesn't care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).

Comment author: MichaelHoward 21 August 2012 08:21:18PM 1 point [-]

I notice that I am meta-confused...

Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this;

Shouldn't we strongly expect this weighting, by Solomonoff induction?

Comment author: [deleted] 21 August 2012 10:09:44PM 3 points [-]

Probability is not obviously amount of existence.

Comment author: [deleted] 21 August 2012 09:36:46AM *  1 point [-]

our universe is so suspiciously simple and regular relative to all imaginable universes

(Assuming you mean “all imaginable universes with self-aware observers in them”.)

Not completely sure about that, even Conway's Game of Life is Turing-complete after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking about this, the starting conditions of this universe were terrifically simple.)

Comment author: [deleted] 21 August 2012 06:30:12AM *  9 points [-]

Don’t forget.
Always, somewhere,
somebody cares about you.
As long as you simulate him,
you are not valueless.

Comment author: fubarobfusco 21 August 2012 06:52:52PM 1 point [-]

The moral value of imaginary friends?

Comment author: [deleted] 20 August 2012 10:09:24PM *  3 points [-]

Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real differences between the four possibilities in this is labels -- the way once in a while people come up with new solutions to Einstein's field equations only to later find out they were just already-known solutions with an unusual coordinate system.)

Comment author: ArisKatsaris 21 August 2012 10:18:59PM *  1 point [-]

Try tabooing exist

I've not yet found a good way to do that. Do you have one?

Comment author: [deleted] 22 August 2012 12:47:50AM 0 points [-]

"Be in this universe"(1) vs "be mathematically possible" should cover most cases, though other times it might not quite match either of those and be much harder to explain.

  1. "This universe" being defined as everything that could interact with the speaker, or with something that could interact with the speaker, etc. ad infinitum.
Comment author: [deleted] 20 August 2012 10:37:10PM -1 points [-]

Try tabooing exist: you might find out that you actually disagree on fewer things than you expect.

That's way too complicated (and as for tabooing 'exist', I'll believe it when I see it). Here's what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don't care about that urine at all. Not one tiny little bit. Heck, I don't even care about that dog, much less all the other dogs, and the urine that is in them. That's a lot of things! And I don't care about any of it. I assume Eliezer doesn't care about the dog urine in that dog either. It would be weird if he did. But it's in the 'everything' bucket, so...I probably misunderstood him?

Comment author: nagolinc 18 August 2012 03:14:58AM 2 points [-]

The problem with using such logical impossibilities is you have to make sure they're really impossible. For example, tiling a corridor with pentagons is completely viable in non-euclidean space. So, sorry to break it to you, but if there's a multiverse, your story is real in it.
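For concreteness, the Euclidean obstruction is just angle arithmetic (standard geometry, not from the comment): a regular pentagon's interior angle does not divide 360°, so no whole number of them can meet flush around a vertex:

```latex
\theta = \frac{(5-2)\cdot 180^\circ}{5} = 108^\circ,
\qquad
\frac{360^\circ}{108^\circ} = \frac{10}{3} \notin \mathbb{Z}
```

In the hyperbolic plane interior angles can be made smaller than 108°; with right-angled regular pentagons, exactly four meet at each vertex (the order-4 pentagonal tiling), which is what makes the non-Euclidean corridor viable.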

Comment author: MichaelHoward 18 August 2012 12:31:42AM *  2 points [-]

"She heard Harry sigh, and after that they walked in silence for a while, passing through an archway of some reddish metal like copper, into a corridor that was just like the one they'd left except that it was tiled in pentagons instead of squares."

"she was trying to count the number of things in the room for the third time and still not getting the same answer, even though her memory insisted that nothing had been added or removed"

I'm curious though, is there anything in there that would even count as this level of logically impossible? Can anyone remember one?

Comment author: [deleted] 17 August 2012 10:22:47PM *  2 points [-]

Anyway, I've decided that, when not talking about mathematics, real, exist, happen, etc. are deictic terms which specifically refer to the particular universe the speaker is in. Using real to apply to everything in Tegmark's multiverse fails Egan's Law IMO. See also: the last chapter of Good and Real.

Comment author: [deleted] 17 August 2012 10:13:52PM *  2 points [-]

Of course, universes including stories extremely similar to HPMOR except that the corridor is tiled in hexagons etc. do ‘exist’ ‘somewhere’. (EDIT: hadn't noticed the same point had been made before. OK, I'll never again reply to comments in “Top Comments” without reading already existing replies first -- if I remember not to.)

Comment author: Raemon 17 August 2012 03:53:50PM 3 points [-]

Tiling the wall with impossible geometry seems reasonable, but from what I recall about the objects in Dumbledore's room, all the story said was that Hermione kept losing track. Not sure whether artist intent trumps reader interpretation, but at first glance it seems far more likely to me that magic was causing Hermione to be confused than that magic was causing mathematical impossibilities.

Comment author: arundelo 17 August 2012 03:20:43PM *  2 points [-]

pentagrams


[...] into a corridor that was just like the one they'd left except that it was tiled in pentagons instead of squares.

Comment author: Wrongnesslessness 17 August 2012 04:30:47PM 3 points [-]

And they aren't even regular pentagons! So, it's all real then...

Comment author: jslocum 17 August 2012 03:11:43PM *  14 points [-]

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagrams and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reluctance to manifest that book in our universe?

Comment author: Mitchell_Porter 17 August 2012 11:49:23PM 3 points [-]

Absolutely; I hope he doesn't think that writing a story about X increases the measure of X. But then why else would he introduce these "impossibilities"?

Comment author: Desrtopa 17 August 2012 11:56:12PM 5 points [-]

Because it's funny?

Comment author: [deleted] 17 August 2012 03:54:10PM 0 points [-]

It is a different story then, so the original HPMOR would still not be nonfiction in another universe. For all we know, the existence of a corridor tiled with pentagons is in fact an important plot point and removing it would utterly destroy the structure of upcoming chapters.

Comment author: Eliezer_Yudkowsky 17 August 2012 09:41:24PM 2 points [-]

Nnnot really. The Time-Turner, certainly, but that doesn't make the story uninstantiable. Making a logical impossibility a basic plot premise... sounds like quite an interesting challenge, but that would be a different story.

Comment author: Armok_GoB 18 August 2012 04:30:40PM 1 point [-]

A spell that lets you get a number of objects that is an integer larger than some other integer but smaller than its successor, used to hide something.

Comment author: VincentYu 18 August 2012 05:04:57PM 3 points [-]

This idea (the integer, not the spell) is the premise of the short story The Secret Number by Igor Teper.

Comment author: Armok_GoB 18 August 2012 07:54:30PM 6 points [-]

And SCP-033. And related concepts in Dark Integers by Greg Egan. And probably a bunch of other places. I'm surprised I couldn't find a TVtropes page on it.

Comment author: [deleted] 16 August 2012 10:47:22PM 12 points [-]

The problem with therapy -- including self-help and mind hacks -- is its amazing failure rate. People do it for years and come out of it and feel like they understand themselves better, but they do not change. If it failed to produce both insights and change it would make sense, but it is almost always one without the other.

-- The Last Psychiatrist

Comment author: Chriswaterguy 21 August 2012 01:31:59PM 0 points [-]

Is it our bias towards optimism? (And is that bias there because pessimists take fewer risks, and therefore don't succeed at much and therefore get eliminated from the gene pool?)

I heard (on a PRI podcast, I think) a brain scientist give an interpretation of the brain as a collection of agents, with consciousness as an interpreting layer that invents reasons for our actions after we've actually done them. There's evidence of this post-fact interpretation - and while I suspect this is only part of the story, it does give a hint that our conscious mind is limited in its ability to actually change our behavior.

Still, people do sometimes give up alcohol and other drugs, and keep new resolutions. I've stuck to my daily exercise for 22 days straight. These feel like conscious decisions (though I may be fooling myself) but where my conscious will is battling different intentions, from different parts of my mind.

Apologies if that's rambling or nonsensical. I'm a bit tired (because every day I consciously decide to sleep early and every day I fail to do it) and I haven't done my 23rd day's exercise yet. Which I'll do now.

Comment author: cousin_it 16 August 2012 05:35:13PM 17 points [-]

If cats looked like frogs we’d realize what nasty, cruel little bastards they are.

-- Terry Pratchett, "Lords and Ladies"

Comment author: [deleted] 16 August 2012 10:10:56PM 1 point [-]

I don't get it. (Anyway, the antecedent is so implausible I have trouble evaluating the counterfactual. Is that supposed to be the point, à la “if my grandma had wheels”?)

Comment author: cousin_it 16 August 2012 10:39:05PM 23 points [-]

Here's the context of the quote:

“The thing about elves is they’ve got no . . . begins with m,” Granny snapped her fingers irritably.

“Manners?”

“Hah! Right, but no.”

“Muscle? Mucus? Mystery?”

“No. No. No. Means like . . . seein’ the other person’s point of view.”

Verence tried to see the world from a Granny Weatherwax perspective, and suspicion dawned.

“Empathy?”

“Right. None at all. Even a hunter, a good hunter, can feel for the quarry. That’s what makes ‘em a good hunter. Elves aren’t like that. They’re cruel for fun, and they can’t understand things like mercy. They can’t understand that anything apart from themselves might have feelings. They laugh a lot, especially if they’ve caught a lonely human or a dwarf or a troll. Trolls might be made out of rock, your majesty, but I’m telling you that a troll is your brother compared to elves. In the head, I mean.”

“But why don’t I know all this?”

“Glamour. Elves are beautiful. They’ve got,” she spat the word, “style. Beauty. Grace. That’s what matters. If cats looked like frogs we’d realize what nasty, cruel little bastards they are. Style. That’s what people remember. They remember the glamour. All the rest of it, all the truth of it, becomes . . . old wives’ tales.”

Comment author: JQuinton 15 August 2012 09:35:56PM *  5 points [-]

Evil doesn't worry about not being good

  • from the video game "Dragon Age: Origins" spoken by the player.

Not sure if this is a "rationality" quote in and of itself; maybe a morality quote?

Comment author: [deleted] 15 August 2012 01:37:40PM 6 points [-]

“If a man take no thought about what is distant, he will find sorrow near at hand.”

--Confucius

Comment author: itaibn0 13 August 2012 12:27:33AM *  2 points [-]

I fear perhaps thou deemest that we fare
An impious road to realms of thought profane;
But 'tis that same religion oftener far
Hath bred the foul impieties of men:
As once at Aulis, the elected chiefs,
Foremost of heroes, Danaan counsellors,
Defiled Diana's altar, virgin queen,
With Agamemnon's daughter, foully slain.
She felt the chaplet round her maiden locks
And fillets, fluttering down on either cheek,
And at the altar marked her grieving sire,
The priests beside him who concealed the knife,
And all the folk in tears at sight of her.
With a dumb terror and a sinking knee
She dropped; nor might avail her now that first
'Twas she who gave the king a father's name.
They raised her up, they bore the trembling girl
On to the altar—hither led not now
With solemn rites and hymeneal choir,
But sinless woman, sinfully foredone,
A parent felled her on her bridal day,
Making his child a sacrificial beast
To give the ships auspicious winds for Troy:
Such are the crimes to which Religion leads.

Lucretius, De rerum natura

Comment author: itaibn0 13 August 2012 12:33:49AM 1 point [-]

How do you make newlines work inside quotes? The formatting when I made this comment is bad.

Comment author: arundelo 13 August 2012 12:42:41AM 4 points [-]

> This paragraph is above a line with nothing but a greater-than sign.
>
> This paragraph is below a line with nothing but a greater-than sign.

This is the same as if you wrote it without the greater-than sign then added a greater-than sign to the beginning of each line.

(If you want a line break without a paragraph break, end a line with two spaces.)

Comment author: itaibn0 13 August 2012 12:49:07AM 0 points [-]

Thanks.

Comment author: NancyLebovitz 13 August 2012 12:07:21AM 0 points [-]

My favorite fantasy is living forever, and one of the things about living forever is all the names you could drop.

Roz Kaveny

Comment author: lukeprog 12 August 2012 07:15:37PM 6 points [-]

In matters of science, the authority of thousands is not worth the humble reasoning of one single person.

Galileo

Comment author: wedrifid 13 August 2012 01:05:14AM 1 point [-]

In matters of science, the authority of thousands is not worth the humble reasoning of one single person.

Almost always false.

Comment author: OrphanWilde 13 August 2012 03:34:29PM 6 points [-]

If the basis of the position of the thousands -is- their authority, then the reason of one wins. If the basis of their position is reason, as opposed to authority, then you don't arrive at that quote.

Comment author: potato 13 August 2012 02:42:09PM 0 points [-]

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

Comment author: Bruno_Coelho 24 August 2012 06:11:48PM 0 points [-]

The majority is wrong most of the time. Either you search the data for patterns, or you put credence in some author or group. People keep making mathematical claims without basic training all the time -- here too.

Comment author: faul_sname 13 August 2012 10:31:39PM 7 points [-]

I wouldn't, though I would trust a thousand scientists over a billion sages.

Comment author: wedrifid 13 August 2012 04:03:41PM *  3 points [-]

It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.

It would depend on the subject. Do we control for time period and the relative background knowledge of their culture in general?

Comment author: [deleted] 12 August 2012 08:38:20PM 5 points [-]

OTOH, thousands would be less likely to all make the same mistake than one single person -- were it not for information cascades.

Comment author: gwern 11 August 2012 09:42:43PM 7 points [-]

Anything worth doing is worth doing badly.

--Herbert Simon (quoted by Pat Langley)

Comment author: [deleted] 12 August 2012 01:57:19PM 7 points [-]

Including artificial intelligence? ;-)

Comment author: Vaniver 12 August 2012 12:13:53AM 5 points [-]

The Chesterton version looks like it was designed to poke the older (and in my opinion better) advice from Lord Chesterfield:

Whatever is worth doing at all, is worth doing well.

Or, rephrased as Simon did:

Anything worth doing is worth doing well.

I strongly recommend his letters to his son. They contain quite a bit of great advice- as well as politics and health and so on. As it was private advice given to an heir, most of it is fully sound.

(In fact, it's been a while. I probably ought to find my copy and give it another read.)

Comment author: arundelo 12 August 2012 01:14:19AM 1 point [-]

Ah, I was gonna mention this. Didn't know it was from Chesterfield.

I think there'd be more musicians (a good thing IMO) if more people took Chesterton's advice.

Comment author: gwern 12 August 2012 12:17:10AM *  8 points [-]

Yeah, they're on my reading list. My dad used to say that a lot, but I always said the truer version was 'Anything not worth doing is not worth doing well', since he was usually using it about worthless yardwork...

Comment author: arundelo 11 August 2012 09:51:17PM 2 points [-]

A favorite of mine, but according to Wikiquote G.K. Chesterton said it first, in chapter 14 of What's Wrong With The World:

If a thing is worth doing, it is worth doing badly.

Comment author: gwern 11 August 2012 10:56:06PM 0 points [-]

I like Simon's version better: it flows without the awkward pause for the comma.

Comment author: arundelo 11 August 2012 11:28:28PM *  3 points [-]

Yep, it seems that often epigrams are made more epigrammatic by the open-source process of people misquoting them. I went looking up what I thought was another example of this, but Wiktionary calls it "[l]ikely traditional" (though the only other citation is roughly contemporary with Maslow).

Comment author: gwern 11 August 2012 11:36:10PM 6 points [-]

Memetics in action - survival of the most epigrammatic!

Comment author: Aurora 11 August 2012 03:35:41AM 3 points [-]

Who taught you that senseless self-chastisement? I give you the money and you take it! People who can't accept a gift have nothing to give themselves. -- De Gankelaar (Karakter, 1997)

Comment author: Aurora 11 August 2012 03:23:08AM 2 points [-]

Nulla è più raro al mondo, che una persona abitualmente sopportabile. -Giacomo Leopardi

(Nothing in the world is rarer than a person who is habitually bearable.)

Comment author: Eliezer_Yudkowsky 10 August 2012 05:04:58PM 16 points [-]

"Silver linings are like finding change in your couch. It's there, but it never amounts to much."

-- http://www.misfile.com/?date=2012-08-10

Comment author: DaFranker 10 August 2012 05:14:16PM *  -1 points [-]

Hah! One of my favorite authors fishing quotes on one of my favorite topics out of one of my favorite webcomics. I smell the oncoming affective death spiral.

I guess this is the time to draw the sword and cut the beliefs with full intent, is it?

Comment author: MichaelGR 09 August 2012 08:06:20PM 11 points [-]

The world is full of obvious things which nobody by any chance ever observes…

— Arthur Conan Doyle, “The Hound of the Baskervilles”

Comment author: FiftyTwo 09 August 2012 07:06:09PM 5 points [-]

[Meta] This post doesn't seem to be tagged 'quotes,' making it less convenient to move from it to the other quote threads.

Comment author: Alejandro1 20 August 2012 08:11:13PM 0 points [-]

Done (and sorry for the long delay).

Comment author: Alicorn 09 August 2012 12:26:50AM 12 points [-]

It's not the end of the world. Well. I mean, yes, literally it is the end of the world, but moping doesn't help!

-- A Softer World

Comment author: [deleted] 08 August 2012 07:58:35PM 11 points [-]

When a philosophy thus relinquishes its anchor in reality, it risks drifting arbitrarily far from sanity.

Gary Drescher, Good and Real

Comment author: NancyLebovitz 08 August 2012 04:43:07PM *  28 points [-]

But I came to realize that I was not a wizard, that "will-power" was not mana, and I was not so much a ghost in the machine, as a machine in the machine.

Ta-nehisi Coates

Comment author: Scottbert 08 August 2012 02:34:31AM *  20 points [-]

reinventing the wheel is exactly what allows us to travel 80mph without even feeling it. the original wheel fell apart at about 5mph after 100 yards. now they're rubber, self-healing, last 4000 times longer. whoever intended the phrase "you're reinventing the wheel" to be an insult was an idiot.

--rickest on IRC

Comment author: MarkusRamikin 30 January 2013 10:39:20AM *  3 points [-]

Clever-sounding and wrong is perhaps the worst combination in a rationality quote.

Comment author: kboon 13 August 2012 12:47:27PM *  6 points [-]

So, no, you shouldn't reinvent the wheel. Unless you plan on learning more about wheels, that is.

Jeff Atwood

Comment author: thomblake 08 August 2012 09:01:19PM 15 points [-]

To go along with what army1987 said, "reinventing the wheel" isn't going from the wooden wheel to the rubber one. "Reinventing the wheel" is ignoring the rubber wheels that exist and spending months of R&D to make a wooden circle.

For example, trying to write a function to do date calculations, when there's a perfectly good library.
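To illustrate the date pitfall (a sketch of my own, with Python's standard library standing in for the "perfectly good library"; `naive_next_day` is a hypothetical hand-rolled helper):

```python
from datetime import date, timedelta

# The naive hand-rolled version: just add one to the day field.
# It silently produces invalid dates at month ends and knows
# nothing about leap years.
def naive_next_day(y, m, d):
    return (y, m, d + 1)  # e.g. (2012, 2, 30) -- not a real date

# The library encodes the calendar rules for you.
print(date(2012, 2, 28) + timedelta(days=1))   # 2012-02-29 (leap day)
print(date(2012, 12, 31) + timedelta(days=1))  # 2013-01-01 (year rollover)
```

The same goes for time zones, DST transitions, and week numbering: each is a class of bug the naive function accumulates and the library has already fixed.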

Comment author: DaFranker 10 August 2012 05:24:37PM 1 point [-]

For example, trying to write a function to do date calculations, when there's a perfectly good library.

One obvious caveat is when the cost of finding, linking/registering and learning-to-use the library is greater than the cost of writing + debugging a function that suits your needs (of course, subject to the planning fallacy when doing estimates beforehand). More pronounced when the language/API/environment in question is one you're less fluent/comfortable with.

In this light, "reinventing the wheel" should be further restricted to cases where an irrational decision was made to do something with lower expected utility minus cost than simply using the existing version(s).

Comment author: thomblake 10 August 2012 06:11:52PM 5 points [-]

That's why I chose the example of date calculations specifically. In practice, anyone who tries to write one of those from scratch will get it wrong in lots of different ways all at once.

Comment author: DaFranker 10 August 2012 06:17:08PM *  2 points [-]

Yes. It's a good example. I was more or less making a point against a strawman (made of expected inference), rather than trying to oppose your specific statements; I just felt it was too easy for someone not intimate with the headaches of date functions to mistake this for a general assertion that any rewriting of existing good libraries is a Bad Thing.

Comment author: [deleted] 08 August 2012 07:57:10PM *  19 points [-]

That's not what "reinventing the wheel" (when used as an insult) usually means. I guess that the inventor of the tyre was aware of the earlier types of wheel, their advantages, and their shortcomings. Conversely, the people who typically receive this insult don't even bother to research the prior art on whatever they are doing.

Comment author: frostgiant 08 August 2012 02:13:24AM *  27 points [-]

The problem with Internet quotes and statistics is that often times, they’re wrongfully believed to be real.

— Abraham Lincoln

Comment author: GLaDOS 06 August 2012 10:04:20AM 22 points [-]

The findings reveal that 20.7% of the studied articles in behavioral economics propose paternalist policy action and that 95.5% of these do not contain any analysis of the cognitive ability of policymakers.

-- Niclas Berggren, source and HT to Tyler Cowen

Comment author: Jayson_Virissimo 08 August 2012 03:49:39AM *  5 points [-]

Sounds like a job for...Will_Newsome!

EDIT: Why the downvotes? This seems like a fairly obvious case of researchers going insufficiently meta.

Comment author: MatthewBaker 10 August 2012 07:50:13PM 4 points [-]

META MAN! willnewsomecuresmetaproblemsasfastashecan META MAN!

Comment author: Alicorn 06 August 2012 04:40:11AM *  17 points [-]

Since Mischa died, I've comforted myself by inventing reasons why it happened. I've been explaining it away ... But that's all bull. There was no reason. It happened and it didn't need to.

-- Erika Moen

Comment author: shminux 06 August 2012 05:53:07AM 3 points [-]

I wonder how common it is for people to agentize accidents. I don't do that, but, annoyingly, lots of people around me do.

Comment author: tastefullyOffensive 06 August 2012 03:22:08AM 3 points [-]

A lie, repeated a thousand times, becomes a truth. --Joseph Goebbels, Nazi Minister of Propaganda

Comment author: metatroll 06 August 2012 04:35:43AM 22 points [-]

It does not! It does not! It does not! ... continued here

Comment author: Stabilizer 05 August 2012 11:19:45PM *  19 points [-]

I don't think winners beat the competition because they work harder. And it's not even clear that they win because they have more creativity. The secret, I think, is in understanding what matters.

It's not obvious, and it changes. It changes by culture, by buyer, by product and even by the day of the week. But those that manage to capture the imagination, make sales and grow are doing it by perfecting the things that matter and ignoring the rest.

Both parts are difficult, particularly when you are surrounded by people who insist on fretting about and working on the stuff that makes no difference at all.

-Seth Godin

Comment author: Matt_Simpson 09 August 2012 01:41:50AM 3 points [-]

A common piece of advice from pro Magic: the Gathering players is "focus on what matters." The advice is mostly useless to many people, though, because the pros have made it to that level precisely because they know what matters to begin with.

Comment author: alex_zag_al 09 August 2012 04:56:43AM 16 points [-]

perhaps the better advice, then, is "when things aren't working, consider the possibility that it's because your efforts are not going into what matters, rather than assuming it is because you need to work harder on the issues you're already focusing on"

Comment author: djcb 15 August 2012 03:30:13PM 3 points [-]

That's much better advice than Godin's near-tautology.

Comment author: ChristianKl 08 August 2012 03:16:16PM 4 points [-]

Could you add the link if it was a blog post, or name the book if the source was a book?

Comment author: Stabilizer 09 August 2012 08:05:18PM 2 points [-]

Done.

Comment author: aausch 05 August 2012 07:52:35PM 15 points [-]

Did you teach him wisdom as well as valor, Ned? she wondered. Did you teach him how to kneel? The graveyards of the Seven Kingdoms were full of brave men who had never learned that lesson

-- Catelyn Stark, A Game of Thrones, George R. R. Martin

Comment author: Alicorn 05 August 2012 07:18:30PM 18 points [-]

My knee had a slight itch. I reached out my hand and scratched the knee in question. The itch was relieved and I was able to continue with my activities.

-- The dullest blog in the world

Comment author: Fyrius 02 September 2012 10:18:33AM 3 points [-]

...I don't really get why this is a rationality quote...

Comment author: Alicorn 02 September 2012 05:16:22PM 7 points [-]

Sometimes proceeding past obstacles is very straightforward.

Comment author: JQuinton 15 August 2012 09:43:56PM 6 points [-]

When I was a teenager (~15 years ago) I got tired of people going on and on with their awesome storytelling skills with magnificent punchlines. I was never a good storyteller, so I started telling mundane stories. For example, after someone in my group of friends would tell some amazing and entertaining story, I would start my story:

So this one time I got up. I put on some clothes. It turned out I was hungry, so I decided to go to the store. I bought some eggs, bread, and bacon. I paid for it, right? And then I left the store. I got to my apartment building and went up the stairs. I open my door and take the eggs, bacon, and bread out of the grocery bag. After that, I get a pan and start cooking the eggs and bacon, and put the bread in the toaster. After all of this, I put the cooked eggs and bacon on a plate and put some butter on my toast. I then started to eat my breakfast.

And that was it. People would look dumbfounded for a while, waiting for a punchline or some amazing happening. When they realized none was coming and I was finished, they would start laughing. Granted, I would only pull this little joke after a long stretch of people telling amazing/funny stories.

Comment author: TheOtherDave 16 August 2012 12:16:56AM 13 points [-]

(nods) In the same spirit: "How many X does it take to change a lightbulb? One."

Though I am fonder of "How many of my political opponents does it take to change a lightbulb? More than one, because they are foolish and stupid."

Comment author: cousin_it 06 August 2012 11:53:25AM 6 points [-]

I had an itch on my elbow. I left it to see where it would go. It didn’t go anywhere.

-- The comments to that entry.

When I stumbled on that blog some years ago, it impressed me so much that I started trying to write and think in the same style.

Comment author: [deleted] 05 August 2012 10:23:42PM 2 points [-]

Why do I find that funny?

Comment author: lukeprog 05 August 2012 03:26:02AM 5 points [-]

He who knows best, best knows how little he knows.

Thomas Jefferson

Comment author: tastefullyOffensive 04 August 2012 11:30:32PM -1 points [-]

If you wish to make an apple pie from scratch you must first invent the universe. --Carl Sagan

Comment author: David_Gerard 04 August 2012 05:08:58PM 3 points [-]

Fiction is a branch of neurology.

-- J. G. Ballard (in a "what I'm working on" essay from 1966.)

Comment author: [deleted] 04 August 2012 06:09:52PM 23 points [-]

Take, say, physics, which restricts itself to extremely simple questions. If a molecule becomes too complex, they hand it over to the chemists. If it becomes too complex for them, they hand it to biologists. And if the system is too complex for them, they hand it to psychologists ... and so on until it ends up in the hands of historians or novelists.

Noam Chomsky

Comment author: [deleted] 05 August 2012 01:00:33AM 4 points [-]

Comment author: David_Gerard 04 August 2012 07:25:08PM 3 points [-]

Ballard does note later in the same essay "Neurology is a branch of fiction."

Comment author: arundelo 04 August 2012 07:46:23PM 11 points [-]

I am a strange loop and so can you!

Comment author: summerstay 04 August 2012 02:41:24PM *  38 points [-]

Interviewer: How do you answer critics who suggest that your team is playing god here?

Craig Venter: Oh... we're not playing.

Comment author: lukeprog 04 August 2012 10:28:30AM 10 points [-]

Reductionism is the most natural thing in the world to grasp. It's simply the belief that "a whole can be understood completely if you understand its parts, and the nature of their sum." No one in her left brain could reject reductionism.

Douglas Hofstadter

Comment author: ChristianKl 11 August 2012 11:21:09AM 5 points [-]

The interesting thing is that Hofstadter doesn't seem to be arguing here that reductionism is true, but that it's a powerful meme that easily gets into people's brains.

Comment author: [deleted] 06 August 2012 07:56:27AM 9 points [-]

ADBOC. Literally, that's true (but tautologous), but it suggests that understanding the nature of their sum is simple, which it isn't. Knowing the Standard Model gives hardly any insight into sociology, even though societies are made of elementary particles.

Comment author: Mitchell_Porter 04 August 2012 10:32:56AM 8 points [-]

That quote is supposed to be paired with another quote about holism.

Comment author: chaosmosis 05 August 2012 11:55:00PM *  1 point [-]

Q: What did the strange loop say to the cow? A: MU!

Comment author: Alejandro1 06 August 2012 03:54:30AM 5 points [-]

-- Knock knock.

-- Who is it?

-- Interrupting koan.

-- Interrupting ko-

-- MU!!!

Comment author: D_Malik 04 August 2012 04:15:04AM *  14 points [-]

Only the ideas that we actually live are of any value.

-- Hermann Hesse, Demian

Comment author: roland 03 August 2012 08:47:38PM 0 points [-]

However, the facile explanations provided by the left brain interpreter may also enhance the opinion of a person about themselves and produce strong biases which prevent the person from seeing themselves in the light of reality and repeating patterns of behavior which led to past failures. The explanations generated by the left brain interpreter may be balanced by right brain systems which follow the constraints of reality to a closer degree.

Comment author: harshhpareek 03 August 2012 04:22:53PM *  6 points [-]

To develop mathematics, one must always labor to substitute ideas for calculations.

-- Dirichlet

(I don't have the source, but the following paper quotes it: Prolegomena to Any Future Qualitative Physics)

Comment author: MichaelHoward 03 August 2012 11:41:24AM 9 points [-]

Should we add a point to these quote posts, that before posting a quote you should check that there is a reference to its original source or context? Not necessarily to add to the quote, but you should be able to find it if challenged.

wikiquote.org seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted, or just plain made-up-on-the-spot nuggets of disinformation that have gone viral and colonized Googlespace, lying in wait to catch the unwary (such as, apparently, myself).

Comment author: RichardKennaway 03 August 2012 12:05:45PM 4 points [-]

Yes, and also a point to check whether the quote has been posted to LW already.

Comment author: Delta 03 August 2012 10:41:45AM 55 points [-]

“Ignorance killed the cat; curiosity was framed!” ― C.J. Cherryh

(not sure if that is who said it originally, but that's the first attribution I found)

Comment author: roland 03 August 2012 08:56:07AM 28 points [-]

Yes -- and to me, that's a perfect illustration of why experiments are relevant in the first place! More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then hopefully we've learned why the experiment wasn't necessary to begin with -- why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves! --Scott Aaronson

Comment author: faul_sname 03 August 2012 05:39:22PM 3 points [-]

Or at least confirmation bias makes it seem that way.

Comment author: roland 03 August 2012 08:49:06PM 6 points [-]

Also hindsight bias. But I still think the quote has a perfectly valid point.

Comment author: faul_sname 04 August 2012 07:54:51PM 3 points [-]

Agreed.

Comment author: katydee 03 August 2012 08:35:20AM 19 points [-]

I have always thought that one man of tolerable abilities may work great changes, and accomplish great affairs among mankind, if he first forms a good plan, and, cutting off all amusements or other employments that would divert his attention, makes the execution of that same plan his sole study and business.

-- Benjamin Franklin