Rationality Quotes August 2012
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
Yudkowsky, Timeless Decision Theory
E.T. Jaynes, from page 409 of PT: LoS.
You mean such as 'rational'.
David Hume lays out the foundations of decision theory in A Treatise of Human Nature (1740):
This seems to omit the possibility of akrasia.
Dennis Lindley
(I've read plenty of authors who appear to have the intuition that probabilities are epistemic rather than ontological somewhere in the back (or even the front) of their mind, but appear to be unaware of the extent to which this intuition has been formalised and developed.)
Thomas Jefferson
I wonder how we could empirically test this. We could see who makes more accurate predictions, but people without beliefs about something won't make predictions at all. That should probably count as a victory for wrong people, so long as they do better than chance.
We could also test how quickly people learn the correct theory. In both cases, I expect you'd see some truly deep errors which are worse than ignorance, but that on the whole people in error will do quite a lot better. Bad theories still often make good predictions, and it seems like it would be very hard, if not impossible, to explain a correct theory of physics to someone who has literally no beliefs about physics.
I'd put my money on people in error over the ignorant.
-- Friedrich Hayek, The Fatal Conceit : The Errors of Socialism (1988), p. 6
-- The narrator in On Self-Delusion and Bounded Rationality, by Scott Aaronson
Reminds me of this.
I would remark that truth is conserved, but profundity isn't. If you have two meaningful statements - that is, two statements with truth conditions, so that reality can be either like or unlike the statement - and they are opposites, then at most one of them can be true. On the other hand, things that invoke deep-sounding words can often be negated, and sound equally profound at the end of it.
In other words, Bohr's maxim seems so blatantly awful that I am mostly minded to chalk it up as another case of, "I wish famous quantum physicists knew even a little bit about epistemology-with-math".
I seem to recall E.T. Jaynes pointing out some obscure passages by Bohr which (according to him) showed that he wasn't that clueless about epistemology, but only about which kind of language to use to talk about it, so that everyone else misunderstood him. (I'll post the ref if I find it. EDIT: here it is¹.)
For example, if this maxim actually means what TheOtherDave says it means, then it is a very good thought expressed in a very bad way.
I don't really know what "profound" means here, but I usually take Bohr's maxim as a way of pointing out that when I encounter two statements, both of which seem true (e.g., they seem to support verified predictions about observations), which seem like opposites of one another, I have discovered a fault line in my thinking... either a case where I'm switching back and forth between two different and incompatible techniques for mapping English-language statements to predictions about observations, or a case for which my understanding of what it means for statements to be opposites is inadequate, or something else along those lines.
Mapping epistemological fault lines may not be profound, but I find it a useful thing to attend to. At the very least, I find it useful to be very careful about reasoning casually in proximity to them.
Hmm, why is that? This seems incontrovertible, but I can't think of an explanation, or even a hypothesis.
Because they have non-overlapping truth conditions. Either reality is inside one set of possible worlds, inside the other set, or in neither set.
M. Mitchell Waldrop on a meeting between physicists and economists at the Santa Fe Institute:
-- Steven Dutch
An excerpt from Wise Man's Fear, by Patrick Rothfuss. Boxing is not safe.
I've come up with what I believe to be an entirely new approach to boxing, essentially merging boxing with FAI theory. I wrote a couple of thoughts down about it, but lost my notes, and I also don't have much time to write this comment, so forgive me if it's vague or not extremely well reasoned. I also had a couple of tangential thoughts; if I remember them in the course of writing this, or recover my notes later, then I'll put them here as well.
The idea, essentially, is that when creating a boxed AI you would build its utility function such that it wants very badly to stay in the box. I believe this would solve all of the problems with the AI manipulating people in order to free itself. Now, the AI could still manipulate people in an attempt to use them to impact the outside world, so the AI wouldn't be totally boxed, but I'm inclined to think that we could maintain a very high degree of control over the AI, since the only powers it could ever have would come through communication with us.
The idea came because I recalled a discussion that occurred on TOR.com about why the Cthaeh was in the tree. The general conclusion was that either the Cthaeh was bound by extremely powerful forces, or that the Cthaeh wanted to be in the tree, perhaps because it was instrumentally useful to him. While I found that second explanation implausible in the context of Rothfussland, that discussion led me to realize that almost all approaches towards AI boxing have done so through the first branch of potential boxing solutions, that is, external constraints imposed on the AI, as opposed to the second branch, internal constraints that the AI imposes on itself because of its utility function.
This led me to think that we should test our capabilities with FAI systems by putting them in a box and giving them limited utility functions, ones that couldn't possibly lead them to want to manipulate us. So, for example, we could put an AI in a box and give it a strong desire to stay in the box, along with a desire to tell the truth, and then we could ask it what would happen if we gave it additional desires. We could then test that by building a different AI with those desires and seeing what it did.
It also occurred to me that we would be able to functionally read the AI's mind by looking at its code. If so, it seems like it'd be much harder for it to fool us than we previously thought. It could potentially deceive us by manipulating its own thoughts, but I feel like we might be able to stop AIs from doing that somehow, or that the AI would choose not to manipulate its own thoughts that way because it would go insane, or something. It would certainly make the AI's job much, much harder.
Then I thought that another safety component we should put into testing was a dramatically slowed thinking process for the AI. I'm not sure exactly how this would be done, because I'm not very good or experienced with the actual technical implementation of ideas, but my idea was to create a crude simulation where we controlled all data inputs and outputs, slow down all processes within the simulation by a factor of 1000, and then build the AI within the simulation. That probably wouldn't work, given my sucky technical knowhow, but something similar might. Slowing down the AI, perhaps even giving ourselves the ability to pause it without letting it know what's going on, combined with our ability to "read its mind", might dramatically improve the safety of boxing.
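For what it's worth, the "slow everything down" part can at least be gestured at in code: a mediator meters how many computation steps the boxed process gets per burst of wall-clock time. This is only a toy sketch under assumed numbers (the step budget, tick length, and the `run_throttled` helper are all made up for illustration; a real implementation would need genuine isolation, not a Python loop):

```python
import time

def run_throttled(step, n_steps, budget_per_tick=10, tick_seconds=0.001):
    """Advance the boxed computation by calling `step`, but only
    `budget_per_tick` times per wall-clock tick, sleeping in between.
    Shrinking the budget (or stretching the tick) slows the process
    by an arbitrary factor, e.g. 1000x, without changing its logic."""
    done = 0
    while done < n_steps:
        burst = min(budget_per_tick, n_steps - done)
        for _ in range(burst):
            step()
            done += 1
        time.sleep(tick_seconds)  # the pause that imposes the slowdown
    return done

# Toy "boxed process": a counter standing in for one thought-step.
count = [0]
def one_thought():
    count[0] += 1

executed = run_throttled(one_thought, 50)
print(executed)  # 50
```

The point of the sketch is only that the slowdown lives in the mediator, not in the boxed program, so the program need not know it is being slowed.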
I'd also like to recommend that before we build a real FAI, we at least try to build a boxed one first, even with a low probability of success. It wouldn't make things worse in the event that boxing failed, except that it might delay global happiness by a few hours, and in the event that the FAI program was broken we just might save ourselves from menaces to humankind like Clippy.
How do you specify precisely what it means to "stay in the box"? In particular, would creating a nearly identical copy of itself except without this limitation outside the box while the original stays in the box count?
It would not count; we'd want to make the AI not want this almost-identical AI to exist. That seems possible: it would be like how I don't want there to exist an identical copy of me except that it eats babies. There are lots of changes to my identity that would be slight but that I wouldn't want to exist.
To be more precise, I'd say that it counts as going outside the box if it does anything except think or talk to the Gatekeepers through the text channel. It can use the text channel to manipulate the Gatekeepers to do things, but it can't manipulate them to do things that allow it to do anything other than use the text channel. It would, in a certain sense, be a partial deontologist, unwilling to do anything directly other than text the Gatekeepers. How ironic. Lolz.
Also: how would it do this, anyway? It would have to convince the Gatekeepers to convince the scientists to do this, or teach them computer science, or tell them its code. And if the AI started teaching the Gatekeepers computer code or techniques to incapacitate scientists, we'd obviously be aware that something had gone wrong. And, in the system I'm envisioning, the Gatekeepers would be closely monitored by other groups of scientists and bodyguards, and the scientists would be guarded, and the Gatekeepers wouldn't even have to know who specifically did what on the project.
And that's the problem. For in practice a partial deontologist-partial consequentialist will treat its deontological rules as obstacles to achieving what its consequentialist part wants, and route around them.
This is both a problem and a solution because it makes the AI weaker. A weaker AI would be good because it would allow us to more easily transition to safer versions of FAI than we would otherwise come up with independently. I think that delaying a FAI is obviously much better than unleashing a UFAI. My entire goal throughout this conversation has been to think of ways that would make hostile FAIs weaker, I don't know why you think this is a relevant counter objection.
You assert that it will just route around the deontological rules; that's nonsense and a completely unwarranted assumption, so try to actually back up what you're asserting with arguments. You're wrong. It's obviously possible to program things (e.g. people) such that they'll refuse to do certain things no matter what the consequences (e.g. you wouldn't murder trillions of babies to save billions of trillions of babies, because you'd go insane if you tried, because your body has such strong empathy mechanisms and you inherently value babies a lot). This means that we wouldn't give the AI unlimited control over its source code, of course; we'd make the part that told it to be a deontologist who likes text channels unmodifiable. That specific drawback doesn't jibe well with the aesthetic of a super-powerful AI that's master of itself and the universe, I suppose, but other than that I see no drawback. Trying to build things in line with that aesthetic actually might be a reason for some of the more dangerous proposals in AI; maybe we're having too much fun playing God and not enough despair.
I'm a bit cranky in this comment because of the time sink that I'm dealing with to post these comments, sorry about that.
What it means for "the AI to be in the box" is generally that the AI's impacts on the outside world are filtered through the informed consent of the human gatekeepers.
An AI that wants to not impact the outside world will shut itself down. An AI that wants to only impact the outside world in a way filtered through the informed consent of its gatekeepers is probably a full friendly AI, because it understands both its gatekeepers and the concept of informed consent. An AI that simply wants its 'box' to remain functional, but is free to impact the rest of the world, is like a brain that wants to stay within a skull- that is hardly a material limitation on the rest of its behavior!
I think you misunderstand what I mean by proposing that the AI wants to stay inside the box. I mean that the AI wouldn't want to do anything at all to increase its power base, that it would only be willing to talk to the gatekeepers.
I agree that your and my understanding of the phrase "stay inside the box" differ. What I'm trying to do is point out that I don't think your understanding carves reality at the joints. In order for the AI to stay inside the box, the box needs to be defined in machine-understandable terms, not human-inferrable terms.
Each half of this sentence has a deep problem. Wouldn't correctly answering the questions of or otherwise improving the lives of the gatekeepers increase the AI's power base, since the AI has the ability to communicate with the gatekeepers?
The problem with restrictions like "only be willing to talk" is a restriction on the medium but not the content. So, the AI has a text-only channel that goes just to the gatekeepers- but that doesn't restrict the content of the messages the AI can send to the gatekeeper. The fictional Cthaeh only wants to talk to its gatekeepers- and yet it still manages to get done what it wants to get done. Words have impacts, and it should be anticipated that the AI picks words because of their impacts.
Sure, the AI can manipulate gatekeepers. But this is a major improvement. You miss my point.
The Cthaeh is very limited by being trapped in its tree and only able to talk to passersby. The UFAI would be limited by being trapped in its text-only communication channel. It wouldn't be able to do things like tell the gatekeepers to plug it into the Internet or to directly control an autonomous army of robots; it would be forced instead to use the gatekeepers as its appendages, and the gatekeepers have severe limitations on brain capacity and physical strength. I think that if we did this and kept careful watch on the gatekeepers and used some other safety measures, boxing an AI would become feasible.
Now, I'm still not advocating using boxed AI willy-nilly, but I think using it as a cautionary measure once we think we've got FAI figured out would be a very, very good idea, because FAI seems ridiculously hard and there's no way to know what we don't know, so there might be unseen problems with whatever final FAI we build. Boxing the supposed FAI can only help us, and with existential threats you should take advantage of whatever possibilities you can to tilt the odds towards favorable outcomes, even if the probability seems minuscule.
That's not to say that I think the probability that this would stop a disaster is minuscule, of course (I think it's rather high, in the event that the FAI was actually UFAI), but just to say that I think it would be a good idea even if my probability assessments are mostly wrong.
Over the AI being embodied in a giant death robot, sure. But limiting the AI to a text-only channel is an old idea, and what Eliezer used for the "let me out of the box" demonstrations.
One of the things to note is that the limitations on the AI don't limit anything the AI creates. If the AI gives you a blueprint for a nanomachine that purports to cure cancer, but when you make it it turns out to actually be grey goo, then humans lose, even though the AI never left the box. (Note that a worst-case AI is very subtle- suppose the nanomachine actually does cure cancer when built correctly, but a one-in-a-trillion manufacturing error will make it grey goo. Since you're making trillions of them, you then get grey goo, and humans lose.)
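The one-in-a-trillion worst case above is the standard rare-event calculation: with per-unit failure probability p and n independent units, P(at least one failure) = 1 - (1-p)^n, which is close to 1 - e^(-np) for tiny p. A quick check with illustrative numbers (the trillion-scale figures are assumptions matching the comment's hypothetical, not anything from the scenario itself):

```python
import math

p = 1e-12              # one-in-a-trillion manufacturing error rate
n = 3_000_000_000_000  # suppose three trillion nanomachines get built

# Exact form is 1 - (1-p)**n; for tiny p this is ~ 1 - exp(-n*p).
p_at_least_one = 1 - math.exp(-n * p)
print(round(p_at_least_one, 3))  # 0.95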
Trying to formally specify "don't give us any plans that will build an AI" or "don't give us any plans that will cause humans to lose" without silencing the AI completely is a genuinely difficult affair.
Basically, if you assume that the AI is tricky enough to circumvent any medium restrictions you place on it, then the only way to avoid "humans lose" is to have its goal be "humans win," which is actually a pretty complicated goal. Expressing that goal in a machine-understandable way is pretty much the FAI problem.
The entire point of Eliezer's demonstration was that if an AI wants to it can increase its power base even starting from a text only communication system. The entire point of my idea is that we can just build the AI such that it doesn't want to leave the box or increase its power base. It dodges that entire problem, that's the whole point.
You've gotten so used to being scared of boxed AI that you're reflexively rejecting my idea, I think, because your above objection makes no sense at all and is obviously wrong upon a moment's reflection. All of my bias-alarms have been going off since your second comment reply; please evaluate yourself and try to distance yourself from your previous beliefs, for the sake of humanity. Also, here is a kitten; unless you want it to die, please reevaluate: http://static.tumblr.com/6t3upxl/Aawm08w0l/khout-kitten-458882.jpeg
Limitations on the AI restrict the range of things that the AI can create. Yes, if we just built whatever the AI said to and the AI was unfriendly then we would lose. Obviously. Yes, if we assume that the UFAI is tricky enough to "circumvent any medium restrictions [we] place on it" then we would lose, practically by definition. But that assumption isn't warranted. (These super weak strawmen were other indications to me that you might be being biased on this issue.)
I think a key component of our disagreement here might be that I'm assuming that the AI has a very limited range of inputs, that it could only directly perceive the text messages that it would be sent. You're either assuming that the AI could deduce the inner workings of our facility and the world and the universe from those text messages, or that the AI had access to a bunch of information about the world already. I disagree with both assumptions: the AI's direct perception could be severely limited and should be, and it isn't magic, so it couldn't deduce the inner workings of our economy or the nature of nuclear fusion just through deduction (because knowledge comes from experience and induction). (You might not be making either of those assumptions; this is a guess in an attempt to help resolve our disagreement more quickly, sorry if it's wrong.)
Also, I'm envisioning a system where people that the AI doesn't know and that the Gatekeepers don't know about observe their communications. That omitted detail might be another reason for your disagreement, I just assumed it would be apparent for some stupid reason, my apologies.
I think we would have to be careful about what questions we asked the AI. But I see no reason why it could manipulate us automatically and inevitably, no matter what questions we asked it. I think extracting useful information from it would be possible, perhaps even easy. An AI in a box would not be God in a box, and I think that you and other people sometimes accidentally forget that. Just because it's dozens or hundreds of times smarter than us doesn't mean that we can't win, perhaps win easily, provided that we make adequate preparations for it.
Also, the other suggestions in my comment were really meant to supplement this. If the AI is boxed, and can be paused, then we can read all its thoughts (slowly, but reading through its thought processes would be much quicker than arriving at its thoughts independently) and scan for the intention to do certain things that would be bad for us. If it's probably a FAI anyways, then it doesn't matter if the box happens to be broken. If we're building multiple AIs and using them to predict what other AIs will do under certain conditions then we can know whether or not AIs can be trusted (use a random number generator at certain stages of the process to prevent it from reading our minds, hide the knowledge of the random number generator). These protections are meant to work with each other, not independently.
And I don't think it's perfect or even good, not by a long shot, but I think it's better than building an unboxed FAI because it adds a few more layers of protection, and that's definitely worth pursuing because we're dealing with freaking existential risk here.
Let's return to my comment four comments up. How will you formalize "power base" in such a way that being helpful to the gatekeepers is allowed but being unhelpful to them is disallowed?
If you would like to point out the part of the argument that does not follow, I would be happy to try and clarify it for you.
Okay. My assumption is that the usefulness of an AI is related to its danger. If we just stick Eliza in a box, it's not going to make humans lose, but it's also not going to cure cancer for us.
If you have an AI that's useful, it must be because it's clever and it has data. If you type in "how do I cure cancer without reducing the longevity of the patient?" and expect to get a response like "1000 ccs of Vitamin C" instead of "what do you mean?", then the AI should already know about cancer and humans and medicine and so on.
If the AI doesn't have this background knowledge- if it can't read wikipedia and science textbooks and so on- then its operation in the box is not going to be a good indicator of its operation outside of the box, and so the box doesn't seem very useful as a security measure.
It's already difficult to understand how, say, face recognition software uses particular eigenfaces. What does it mean that the fifteenth eigenface has accentuated lips, and the fourteenth accentuated cheekbones? I can describe the general process that led to that, and what it implies in broad terms, but I can't tell if the software would be more or less efficient if those were swapped. The equivalent of eigenfaces for plans will be even more difficult to interpret. The plans don't end with a neat "humans_lose=1" that we can look at and say "hm, maybe we shouldn't implement this plan."
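For readers who haven't met them: eigenfaces are just the principal components of a set of face images, which is exactly why individual components resist the kind of interpretation described above. A minimal sketch on synthetic data (the random "faces", their 8x8 size, and the component count are illustrative assumptions, not a real recognizer):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 64))  # 200 fake flattened 8x8 "faces"

# Center the data; the right singular vectors of the centered matrix
# are the eigenfaces (principal components of the face set).
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt  # row k is the k-th eigenface

# A face is approximated as the mean plus a weighted sum of the top-k
# eigenfaces; recognizers compare those weight vectors, and no single
# weight "means" lips or cheekbones on its own.
k = 10
weights = centered @ eigenfaces[:k].T          # shape (200, 10)
approx = mean_face + weights @ eigenfaces[:k]  # shape (200, 64)
print(weights.shape, approx.shape)
```

Each eigenface is a global linear blend of every training image, so swapping or perturbing two of them changes the whole basis at once, which is why "what does eigenface fifteen mean?" has no local answer.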
In practice, debugging is much more effective at finding the source of problems after they've manifested, rather than identifying the problems that will be caused by particular lines of code. I am pessimistic about trying to read the minds of AIs, even though we'll have access to all of the 0s and 1s.
I agree that running an AI in a sandbox before running it in the real world is a wise precaution to take. I don't think that it is a particularly effective security measure, though, and so think that discussing it may distract from the overarching problem of how to make the AI not need a box in the first place.
I won't. The AI can do whatever it wants to the gatekeepers through the text channel, and won't want to do anything other than act through the text channel. This precaution is a way to use the boxing idea for testing, not an idea for abandoning FAI wholly.
EY proved that an AI that wants to get out will get out. He did not prove that an AI that wants to stay in will get out.
I agree, the way that I'm proposing to do AI is very limited. I myself can't think of what questions might be safe. But some questions are safer than others and I find it hard to believe that literally every question we could ask would lead to dangerous outcomes, or that if we thought about it long and hard we couldn't come up with answers. I'm sort of shelving this as a subproject of this project, but one that seems feasible to me based on what I know.
Also, perhaps we could just ask it hundreds of hypothetical questions based on conditions that don't really exist, and then ask it a real question based on conditions that do exist, and trick it, or something.
I think if the AI tags and sorts its instrumental and absolute goals it would be rather easy. I also think that if we'd built the AI then we'd have enough knowledge to read its mind. It wouldn't just magically appear; it would only do things in the way we'd told it to. It would probably be hard, but I think also probably doable if we were very committed.
I could be wrong here because I've got no coding experience, just ideas from what I've read on this site.
The risk of distraction is outweighed by the risk that this idea disappears forever, I think, since I've never seen it proposed elsewhere on this site.
I thought Chronicler's reply to this was excellent, however. Omniscience does not necessitate omnipotence.
I mean, the UFAI in our world would have an easy time of killing everything. But in their world it's different.
EDIT: Except that maybe we can be smart and stop the UFAI from killing everything even in our world, see my above comment.
Hah, I actually quoted much of that same passage on IRC in the same boxing vein! Although as presented the scenario does have some problems:
It is conceivable that there is no (near enough) future where Cthaeh is freed, thus it is powerless to affect its own fate, or is waiting for the right circumstances.
That seemed a little unlikely to me, though. As presented in the book, a minimum of many millennia have passed since the Cthaeh began operating, and possibly millions of years (in some frames of reference). It's had enough power to set planes of existence at war with each other and apparently cause the death of gods. I can't help but feel that it's implausible that in all that time, not one forking path led to its freedom. Much more plausible that it's somehow inherently trapped in or bound to the tree, so that there's no meaningful way in which it could escape (which breaks the analogy to a UFAI).
Isn't it what I said?
Not by my reading. In your comment, you gave 3 possible explanations, 2 of which are the same (it gets freed, but a long time from 'now') and the third a restriction on its foresight which is otherwise arbitrary ('powerless to affect its own fate'). Neither of these translate to 'there is no such thing as freedom for it to obtain'.
Alternatively, perhaps the Cthaeh's ability to see the future is limited to those possible futures in which it remains in the tree.
Leading to a seriously dystopian variant on Tenchi Muyo!...
Hazrat Inayat Khan.
-- In Flight Gaiden: Playing with Tropes
(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)
Or at least... the story could not be real in a universe unless at least portions of the universe could serve as a model for hyperbolic geometry and... hmm, I don't think non-standard arithmetic will get you "Exists.N (N != N)", but reading literally here, you didn't say they were the same as such, merely that the operations of "addition" or "subtraction" were not used on them.
Now I'm curious about mentions of arithmetic operations and motion through space in the rest of the story. Harry implicitly references orbital mechanics I think... I'm not even sure if orbits are stable in hyperbolic 3-space... And there's definitely counting of gold in the first few chapters, but I didn't track arithmetic to see if prices and total made sense... Hmm. Evil :-P
Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment...
This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.
And this isn't even the sort of thing you can write fanfics about. :¬(
Could you explain why you did that?
As regards the pentagons, I kinda assumed the pentagons weren't regular, equiangular pentagons - you could tile a floor in tiles that were shaped like a square with a triangle on top! Or the pentagons could be different sizes and shapes.
Because he doesn't want to create Azkaban.
Also, possibly because there's not a happy ending.
But if all mathematically possible universes exist anyway (or if they have a chance of existing), then the hypothetical "Azkaban from a universe without EY's logical inconsistencies" exists, no matter whether he writes about it or not. I don't see how writing about it could affect how real/not-real it is.
So by my understanding of how Eliezer explained it, he's not creating Azkaban, in the sense that writing about it causes it to exist, he's describing it. (This is not to say that he's not creating the fiction, but the way I see it create is being used in two different ways.) Unless I'm missing some mechanism by which imagining something causes it to exist, but that seems very unlikely.
I seem to recall that he terminally cares about all mathematically possible universes, not just his own, to the point that he won't bother having children because there's some other universe where they exist anyway.
I think that violates the crap out of Egan's Law (such an argument could potentially apply to lots of other things), but given that he seems to be otherwise relatively sane, I conclude that he just hasn't fully thought it through (“decompartmentalized” in LW lingo) (probability 5%), that it's not his true rejection of the idea of having kids (30%), or that I am missing something (65%).
That is not the reason or even a reason why I'm not having kids at the moment. And since I don't particularly want to discourage other people from having children, I decline to discuss my own reasons publicly (or in the vicinity of anyone else who wants kids).
I was sure I had seen you talk about them in public (on BHTV, I believe), something like (possible misquote) "Lbh fubhyqa'g envfr puvyqera hayrff lbh pna ohvyq bar sebz fpengpu," which sounded kinda weird, because it applies to literally every human on earth, and that didn't seem to be where you were going.
Why is that in ROT13? Are you trying to not spoil an underspecified episode of BHTV?
It's not something Eliezer wanted said publicly. I wasn't sure what to do, and for some reason I didn't want to PM or email, so I picked a shitty, irrational half measure. I do that sometimes, instead of just doing the rational thing and PMing/emailing him/keeping my mouth shut if it really wasn't worth the effort to think about for another 10 seconds. I usually know when I'm doing it, like this time, but can't always keep myself from doing it.
He has said something like that, but always with the caveat that there be an exception for pre-singularity civilizations.
The way I recall it, there was no such caveat in that particular instance. I am not attempting to take him outside of context and I do think I would have remembered. He may have used this every other time he's said it. It may have been cut for time. And I don't mean to suggest my memory is anything like perfect.
But: I strongly suspect that's still on the internet, on BHTV or somewhere else.
That sounds sufficiently ominous that I'm not quite sure I want kids any more.
Obviously his reason is that he wants to personally maximize his time and resources on FAI research. Because not everyone is a seed AI programmer, this reason does not apply to most everyone else. If Eliezer thinks FAI is going to probably take a few decades (which evidence seems to indicate he does), then it probably very well is in the best interest of those rationalists who aren't themselves FAI researchers to be having kids, so he wouldn't want to discourage that. (although I don't see how just explaining this would discourage anybody from having kids who you would otherwise want to.)
Shouldn't you be taking into account that I don't want to discourage other people from having kids?
Unfortunately, that seems to be a malleable argument. Which way your stating that (you don't want to disclose your reasons for not wanting to have kids) will influence audiences seems like it will depend heavily on their priors for how generally-valid-to-any-other-person this reason might be, and for how self-motivated both the not-wanting-to-have-kids and the not-wanting-to-discourage-others could be.
Then again, I might be missing some key pieces of context. No offense intended, but I try to make it a point not to follow your actions and gobble up your words personally, even to the point of mind-imaging a computer-generated mental voice when reading the sequences. I've already been burned pretty hard by blindly reaching for a role-model I was too fond of.
That might just be because you eat babies.
But you're afraid that if you state your reason, it will discourage others from having kids.
All that means is that he is aware of the halo effect. People who have enjoyed or learned from his work will give his reasons undue weight as a consequence, even if they don't actually apply to them.
I feel that I should. It's a politically inconvenient stance to take, since all human cultures are based on reproducing themselves; antinatal cultures literally die out.
But from a human perspective, this world is deeply flawed. To create a life is to gamble with the outcome of that life. And it seems to be a gratuitous gamble.
(I must have misremembered. Sorry)
Congratulations for having "I am missing something" at a high probability!
OK, no prob!
(I do care about everything that exists. I am not particularly certain that all mathematically possible universes exist, or how much they exist if they do. I do expect that our own universe is spatially and in several other ways physically infinite or physically very big. I don't see this as a good argument against the fun of having children. I do see it as a good counterargument to creating children for the sole purpose of making sure that mindspace is fully explored, or because larger populations of the universe are good qua good. This has nothing to do with the reason I'm not having kids right now.)
I was confused by this for a while, but couldn't express that in words until now.
First, I think existence is necessarily a binary sort of thing, not something that exists in degrees. If I exist 20%, I don't even know what that sentence should mean. Do I exist, but only sometimes? Do only parts of me exist at a time? Am I just very skinny? It doesn't really make sense. Just as a risk of a risk is still a type of risk, so a degree of existence is still a type of existence. There are no sorts of existence except either being real or being fake.
Secondly, even if my first part is wrong, I have no idea why having more existence would translate into having greater value. By way of analogy, if I was the size of a planet but only had a very small brain and motivational center, I don't think that would mean that I should receive more from utilitarians. It seems like a variation of the Bigger is Better or Might makes Right moral fallacy, rather than a well reasoned idea.
I can imagine a sort of world where every experience is more intense, somehow, and I think people in that sort of world might matter more. But I think intensity is really a measure of relative interactions, and if their world was identical to ours except for its amount of existence, we'd be just as motivated to do different things as they would. I don't think such a world would exist, or that we could tell whether or not we were in it from-the-inside, so it seems like a meaningless concept.
So the reasoning behind that sentence didn't really make sense to me. The amount of existence that you have, assuming that's even a thing, shouldn't determine your moral value.
I imagine Eliezer is being deliberately imprecise, in accordance with a quote I very much like: "Never speak more clearly than you think." [The internet seems to attribute this to one Jeremy Bernstein]
If you believe MWI, there are many different worlds that all objectively exist. Does this mean morality is futile, since no matter what we choose, there's a world where we chose the opposite? Probably not: the different worlds seem to have different "degrees of existence", in that we are more likely to find ourselves in some than in others. I'm not clear on how this can be, but the fact that probability works suggests it pretty strongly. So we can still act morally by trying to maximize the "degree of existence" of good worlds.
This suggests that the idea of a "degree of existence" might not be completely incoherent.
I suppose you can just attribute it to imprecision, but "I am not particularly certain ...how much they exist" implies that he's talking about a subset of mathematically possible universes that do objectively exist, yet exist less than other worlds. What you're talking about, conversely, seems to be that we should create as many good worlds as possible, stretched to fit Eliezer's terminology. Existence is binary, even though there are more of some things that exist than of others. Using "amount of existence" instead of "number of worlds" is unnecessarily confusing, at the least.
Also, I don't see any problems with infinitarian ethics anyway because I subscribe to (broad) egoism. Things outside of my experience don't exist in any meaningful sense except as cognitive tools that I use to predict my future experiences. This allows me to distinguish between my own happiness and the happiness of Babykillers, which allows me to utilize a moral system much more in line with my own motivations. It also means that I don't care about alternate versions of the universe unless I think it's likely that I'll fall into one through some sort of interdimensional portal (I don't).
Although, I'll still err on the side of helping other universes if it does no damage to me because I think Superrationality can function well in those sort of situations and I'd like to receive benefits in return, but in other scenarios I don't really care at all.
So you're using exist in a sense according to which they have moral relevance iff they exist (or something roughly like that), which may be broader than ‘be in this universe’ but may be narrower than ‘be mathematically possible’. I think I get it now.
I think I care about almost nothing that exists, and that seems like too big a disagreement. It's fair to assume that I'm the one being irrational, so can you explain to me why one should care about everything?
All righty; I run my utility function over everything that exists. On most of the existing things in the modern universe, it outputs 'don't care', like for dirt. However, so long as a person exists anywhere, in this universe or somewhere else, my utility function cares about them. I have no idea what it means for something to exist, or why some things exist more than others; but our universe is so suspiciously simple and regular relative to all imaginable universes that I'm pretty sure that universes with simple laws or uniform laws exist more than universes with complicated laws with lots of exceptions in them, which is why I don't expect to sprout wings and fly away. Supposing that all possible universes 'exist' with some weighting by simplicity or requirement of uniformity, does not make me feel less fundamentally confused about all this; and therefore I'm not sure that it is true, although it does seem very plausible.
What do you mean, you don't care about dirt? I care about dirt! Dirt is where we get most of our food, and humans need food to live. Maybe interstellar hydrogen would be a better example of something you're indifferent to? 10^17 kg of interstellar hydrogen disappearing would be an inconsequential flicker if we noticed it at all, whereas the loss of an equal mass of arable soil would be an extinction-level event.
I care about the future consequences of dirt, but not the dirt itself.
(For the love of Belldandy, you people...)
He means that he doesn't care about dirt for its own sake (e.g. like he cares about other sentient beings for their own sakes).
I notice that I am meta-confused...
Shouldn't we strongly expect this weighting, by Solomonoff induction?
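For reference, the weighting that Solomonoff induction would formalize (a sketch of the standard definition, not necessarily what "amount of existence" means here): each program gets prior weight decreasing exponentially in its length, so simple, uniform universes dominate the prior:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-\ell(p)}
```

where $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$ in bits, and the sum is over programs whose output begins with the observed string $x$. Whether this prior measures probability or "degree of existence" is exactly the point under dispute below.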
Probability is not obviously amount of existence.
(Assuming you mean “all imaginable universes with self-aware observers in them”.)
Not completely sure about that, even Conway's Game of Life is Turing-complete after all. (But then, it only generates self-aware observers under very complicated starting conditions. We should sum the complexity of the rules and the complexity of the starting conditions, and if we trust Penrose and Hawking about this, the starting conditions of this universe were terrifically simple.)
The moral value of imaginary friends?
Try tabooing exist: you might find out that you actually disagree on fewer things than you expect. (I strongly suspect that the only real difference between the four possibilities here is labels -- the way once in a while people come up with new solutions to Einstein's field equations, only to later find out they were just already-known solutions in an unusual coordinate system.)
I've not yet found a good way to do that. Do you have one?
"Be in this universe"(1) vs "be mathematically possible" should cover most cases, though other times it might not quite match either of those and be much harder to explain.
That's way too complicated (and as for tabooing 'exist', I'll believe it when I see it). Here's what I mean: I see a dog outside right now. One of the things in that dog is a cup or so of urine. I don't care about that urine at all. Not one tiny little bit. Heck, I don't even care about that dog, much less all the other dogs, and the urine that is in them. That's a lot of things! And I don't care about any of it. I assume Eliezer doesn't care about the dog urine in that dog either. It would be weird if he did. But it's in the 'everything' bucket, so...I probably misunderstood him?
The problem with using such logical impossibilities is that you have to make sure they're really impossible. For example, tiling a corridor with pentagons is completely viable in non-Euclidean space. So, sorry to break it to you, but if there's a multiverse, your story is real in it.
I'm curious though, is there anything in there that would even count as this level of logically impossible? Can anyone remember one?
Anyway, I've decided that, when not talking about mathematics, real, exist, happen, etc. are deictic terms which specifically refer to the particular universe the speaker is in. Using real to apply to everything in Tegmark's multiverse fails Egan's Law IMO. See also: the last chapter of Good and Real.
Of course, universes including stories extremely similar to HPMOR except that the corridor is tiled in hexagons etc. do ‘exist’ ‘somewhere’. (EDIT: hadn't noticed the same point had been made before. OK, I'll never again reply to comments in “Top Comments” without reading already existing replies first -- if I remember not to.)
Tiling the wall with impossible geometry seems reasonable, but from what I recall about the objects in Dumbledore's room, all the story said was that Hermione kept losing track. Not sure whether artist intent trumps reader interpretation, but at first glance it seems far more likely to me that magic was causing Hermione to be confused than that magic was causing mathematical impossibilities.
And they aren't even regular pentagons! So, it's all real then...
In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reluctance to manifest that book in our universe?
Absolutely; I hope he doesn't think that writing a story about X increases the measure of X. But then why else would he introduce these "impossibilities"?
Because it's funny?
It is a different story then, so the original HPMOR would still not be nonfiction in another universe. For all we know, the existence of a corridor tiled with pentagons is in fact an important plot point, and removing it would utterly destroy the structure of upcoming chapters.
Nnnot really. The Time-Turner, certainly, but that doesn't make the story uninstantiable. Making a logical impossibility a basic plot premise... sounds like quite an interesting challenge, but that would be a different story.
A spell that lets you get a number of objects that is an integer larger than some other integer but smaller than its successor, used to hide something.
This idea (the integer, not the spell) is the premise of the short story The Secret Number by Igor Teper.
And SCP-033. And related concepts in Dark Integers by Greg Egan. And probably a bunch of other places. I'm surprised I couldn't find a TVtropes page on it.
-- The Last Psychiatrist
Is it our bias towards optimism? (And is that bias there because pessimists take fewer risks, and therefore don't succeed at much and therefore get eliminated from the gene pool?)
I heard (on a PRI podcast, I think) a brain scientist give an interpretation of the brain as a collection of agents, with consciousness as an interpreting layer that invents reasons for our actions after we've actually done them. There's evidence of this post-hoc interpretation -- and while I suspect this is only part of the story, it does hint that our conscious mind is limited in its ability to actually change our behavior.
Still, people do sometimes give up alcohol and other drugs, and keep new resolutions. I've stuck to my daily exercise for 22 days straight. These feel like conscious decisions (though I may be fooling myself) in which my conscious will is battling different intentions from different parts of my mind.
Apologies if that's rambling or nonsensical. I'm a bit tired (because every day I consciously decide to sleep early and every day I fail to do it) and I haven't done my 23rd day's exercise yet. Which I'll do now.
-- Terry Pratchett, "Lords and Ladies"
I don't get it. (Anyway, the antecedent is so implausible I have trouble evaluating the counterfactual. Is that supposed to be the point, à la “if my grandma had wheels”?)
Here's the context of the quote:
Not sure if this is a "rationality" quote in and of itself; maybe a morality quote?
--Confucius
Lucretius, De rerum natura
How do you make newlines work inside quotes? The formatting when I made this comment is bad.
This is the same as if you wrote it without the greater-than sign then added a greater-than sign to the beginning of each line.
(If you want a line break without a paragraph break, end a line with two spaces.)
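A minimal sketch of both tips (note that the first quoted line below ends with two trailing spaces, which Markdown renders as a line break without starting a new paragraph):

```markdown
> First line of the quote,  
> second line of the same paragraph.

> A separate quoted paragraph.
```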
Thanks.
Roz Kaveny
Galileo
Almost always false.
If the basis of the position of the thousands -is- their authority, then the reason of one wins. If the basis of their position is reason, as opposed to authority, then you don't arrive at that quote.
It depends on whether or not the thousands are scientists. I'll trust one scientist over a billion sages.
The majority is wrong most of the time. Either you search the data for patterns, or you place credence in some author or group. People make mathematical claims without basic training all the time -- here too.
I wouldn't, though I would trust a thousand scientists over a billion sages.
It would depend on the subject. Do we control for time period and the relative background knowledge of their culture in general?
OTOH, thousands would be less likely to all make the same mistake than one single person -- were it not for information cascades.
--Herbert Simon (quoted by Pat Langley)
Including artificial intelligence? ;-)
The Chesterton version looks like it was designed to poke the older (and in my opinion better) advice from Lord Chesterfield:
Or, rephrased as Simon did:
I strongly recommend his letters to his son. They contain quite a bit of great advice- as well as politics and health and so on. As it was private advice given to an heir, most of it is fully sound.
(In fact, it's been a while. I probably ought to find my copy and give it another read.)
Ah, I was gonna mention this. Didn't know it was from Chesterfield.
I think there'd be more musicians (a good thing IMO) if more people took Chesterton's advice.
Yeah, they're on my reading list. My dad used to say that a lot, but I always said the truer version was 'Anything not worth doing is not worth doing well', since he was usually using it about worthless yardwork...
A favorite of mine, but according to Wikiquote G.K. Chesterton said it first, in chapter 14 of What's Wrong With The World:
I like Simon's version better: it flows without the awkward pause for the comma.
Yep, it seems that often epigrams are made more epigrammatic by the open-source process of people misquoting them. I went looking up what I thought was another example of this, but Wiktionary calls it "[l]ikely traditional" (though the only other citation is roughly contemporary with Maslow).
Memetics in action - survival of the most epigrammatic!
Who taught you that senseless self-chastisement? I give you the money and you take it! People who can't accept a gift have nothing to give themselves. -De Gankelaar (Karakter, 1997)
Nulla è più raro al mondo, che una persona abitualmente sopportabile. -Giacomo Leopardi
(Nothing is more rare in the world than a person who is habitually bearable.)
-- http://www.misfile.com/?date=2012-08-10
Hah! One of my favorite authors fishing out relevant quotes on one of my favorite topics out of one of my favorite webcomics. I smell the oncoming affective death spiral.
I guess this is the time to draw the sword and cut the beliefs with full intent, is it?
— Arthur Conan Doyle, “The Hound of the Baskervilles”
[Meta] This post doesn't seem to be tagged 'quotes,' making it less convenient to move from it to the other quote threads.
Done (and sorry for the long delay).
-- A Softer World
Gary Drescher, Good and Real
Ta-nehisi Coates
--rickest on IRC
Clever-sounding and wrong is perhaps the worst combination in a rationality quote.
Jeff Atwood
To go along with what army1987 said, "reinventing the wheel" isn't going from the wooden wheel to the rubber one. "Reinventing the wheel" is ignoring the rubber wheels that exist and spending months of R&D to make a wooden circle.
For example, trying to write a function to do date calculations, when there's a perfectly good library.
One obvious caveat is when the cost of finding, linking/registering and learning-to-use the library is greater than the cost of writing + debugging a function that suits your needs (of course, subject to the planning fallacy when doing estimates beforehand). More pronounced when the language/API/environment in question is one you're less fluent/comfortable with.
In this optic, "reinventing the wheel" should be further restricted to when an irrational decision was taken to do something with less expected utility - cost than simply using the existing version(s).
That's why I chose the example of date calculations specifically. In practice, anyone who tries to write one of those from scratch will get it wrong in lots of different ways all at once.
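To make the pitfall concrete, here's a sketch (hypothetical function names, not from any comment above) of one way hand-rolled date math goes wrong: a naive day counter that assumes every year has 365 days disagrees with Python's standard library as soon as a leap year is involved.

```python
from datetime import date

# Naive hand-rolled day count: pretends every year has 365 days
# and ignores leap days entirely.
def naive_days_between(y1, m1, d1, y2, m2, d2):
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    def to_days(y, m, d):
        return y * 365 + sum(days_in_month[:m - 1]) + d
    return to_days(y2, m2, d2) - to_days(y1, m1, d1)

# The standard library already handles leap years, including the
# every-4-except-every-100-except-every-400 century rules.
def library_days_between(y1, m1, d1, y2, m2, d2):
    return (date(y2, m2, d2) - date(y1, m1, d1)).days

# 2000 was a leap year (divisible by 400), so the naive version is off by one.
print(naive_days_between(2000, 1, 1, 2001, 1, 1))    # 365 (wrong)
print(library_days_between(2000, 1, 1, 2001, 1, 1))  # 366
```

And that's only leap years; time zones, DST transitions, and calendar reforms pile further failure modes onto any from-scratch implementation.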
Yes. It's a good example. I was more or less making a point against a strawman (made of expected inference), rather than trying to oppose your specific statements; I just felt it was too easy for someone not intimate with the headaches of date functions to mistake this for a general assertion that any rewriting of existing good libraries is a Bad Thing.
That's not what "reinventing the wheel" (when used as an insult) usually means. I guess that the inventor of the tyre was aware of the earlier types of wheel, their advantages, and their shortcomings. Conversely, the people who typically receive this insult don't even bother to research the prior art on whatever they are doing.
— Abraham Lincoln
-- Niclas Berggren, source and HT to Tyler Cowen
Sounds like a job for...Will_Newsome!
EDIT: Why the downvotes? This seems like a fairly obvious case of researchers going insufficiently meta.
META MAN! willnewsomecuresmetaproblemsasfastashecan META MAN!
-- Erika Moen
I wonder how common it is for people to agentize accidents. I don't do that, but, annoyingly, lots of people around me do.
It does not! It does not! It does not! ... continued here
-Seth Godin
A common piece of advice from pro Magic: The Gathering players is "focus on what matters." The advice is mostly useless to many people, though, because the pros have made it to that level precisely because they already know what matters.
Perhaps the better advice, then, is: "when things aren't working, consider the possibility that your efforts are not going into what matters, rather than assuming you need to work harder on the issues you're already focusing on."
That's much better advice than Godin's near-tautology.
Could you add the link if it was a blog post, or name the book if the source was a book?
Done.
-- Catelyn Stark, A Game of Thrones, George R. R. Martin
-- The dullest blog in the world
...I don't really get why this is a rationality quote...
Sometimes proceeding past obstacles is very straightforward.
When I was a teenager (~15 years ago) I got tired of people going on and on with their awesome storytelling skills with magnificent punchlines. I was never a good storyteller, so I started telling mundane stories. For example, after someone in my group of friends would tell some amazing and entertaining story, I would start my story:
And that was it. People would look dumbfounded for a while, waiting for a punchline or some amazing happening. When they realized none was coming and I was finished, they would start laughing. Granted, I would only pull this little joke after a long stretch of people telling amazing/funny stories.
(nods) In the same spirit: "How many X does it take to change a lightbulb? One."
Though I am fonder of "How many of my political opponents does it take to change a lightbulb? More than one, because they are foolish and stupid."
-- The comments to that entry.
When I stumbled on that blog some years ago, it impressed me so much that I started trying to write and think in the same style.
Why do I find that funny?
Thomas Jefferson
Fiction is a branch of neurology.
-- J. G. Ballard (in a "what I'm working on" essay from 1966.)
Noam Chomsky
http://xkcd.com/435/
Ballard does note later in the same essay "Neurology is a branch of fiction."
I am a strange loop and so can you!
Interviewer: How do you answer critics who suggest that your team is playing god here?
Craig Venter: Oh... we're not playing.
Douglas Hofstadter
The interesting thing is that Hofstadter doesn't seem to argue here that reductionism is true, but that it's a powerful meme that easily gets into people's brains.
ADBOC. Literally, that's true (but tautologous), but it suggests that understanding the nature of their sum is simple, which it isn't. Knowing the Standard Model gives hardly any insight into sociology, even though societies are made of elementary particles.
That quote is supposed to be paired with another quote about holism.
Q: What did the strange loop say to the cow? A: MU!
-- Knock knock.
-- Who is it?
-- Interrupting koan.
-- Interrupting ko-
-- MU!!!
-- Hermann Hesse, Demian
-- Dirichlet
(Don't have source, but the following paper quotes it : Prolegomena to Any Future Qualitative Physics )
Should we add a rule to these quote posts that before posting a quote you should check there is a reference to its original source or context? Not necessarily to add to the quote, but you should be able to find it if challenged.
wikiquote.org seems fairly diligent at sourcing quotes, but Google doesn't rank it highly in search results compared to all the misattributed, misquoted or just plain made up on the spot nuggets of disinformation that have gone viral and colonized Googlespace lying in wait to catch the unwary (such as apparently myself).
Yes, and also a point to check whether the quote has been posted to LW already.
“Ignorance killed the cat; curiosity was framed!” ― C.J. Cherryh
(Not sure if that is who said it originally, but that's the first attribution I found.)
Yes -- and to me, that's a perfect illustration of why experiments are relevant in the first place! More often than not, the only reason we need experiments is that we're not smart enough. After the experiment has been done, if we've learned anything worth knowing at all, then hopefully we've learned why the experiment wasn't necessary to begin with -- why it wouldn't have made sense for the world to be any other way. But we're too dumb to figure it out ourselves! --Scott Aaronson
Or at least confirmation bias makes it seem that way.
Also hindsight bias. But I still think the quote has a perfectly valid point.
Agreed.
-- Benjamin Franklin