
Comment author: AFinerGrain 02 October 2017 11:52:39PM 0 points

I originally learned about these ideas from Thinking, Fast and Slow, but I love hearing them rephrased and repeated again and again. Thinking clearly often means getting in the cognitive habit of questioning every knee-jerk intuition.

On the other hand, coming from a Bryan Caplan / Michael Huemer perspective, aren't we kind of stuck with some set of base intuitions? Intuitions like: I exist, the universe exists, other people exist, effects have causes, I'm not replaced by a new person with memory implants every time I go to sleep...

You might even call these base intuitions "magic," in the sense that you have to have faith in them in order to do anything like rationality.

Comment author: TheAncientGeek 03 October 2017 11:37:19AM *  0 points

Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.

It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can't do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "intuition". Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out... it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.

Philosophers therefore appeal to intuitions because they can't see how to avoid them... whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for your foundational assumptions.

Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods... somehow. Their subculture encourages use of basic principles to move forward, not a turn backwards to critically reflect on the validity of basic principles. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.

Does the use of empiricism shortcut the need for intuitions, in the sense of unfounded foundations?

For one thing, epistemology in general needs foundational assumptions as much as anything else. Which is to say that epistemology needs epistemology as much as anything else: to judge the validity of one system of epistemology, you need another one. There is no way of judging an epistemology starting from zero, from a complete blank. Since epistemology is inescapable, and since every epistemology has its basic assumptions, there are basic assumptions involved in empiricism.

Empiricism specifically has the problem of needing an ontological foundation. Philosophy illustrates this point with sceptical scenarios about how you are being systematically deceived by an evil genie. Scientific thinkers have closely parallel scenarios in which you cannot be sure whether you are in the Matrix or some other virtual reality. Either way, these hypotheses illustrate the point that the empiricists are running on an assumption that if you can see something, it is there.

Comment author: RobbBB 24 May 2015 07:02:50PM *  5 points

No, that is not how it works: I don't need to either accept or reject MWI. I can also treat it as a causal story lacking empirical content.

To say that MWI lacks empirical content is also to say that the negation of MWI lacks empirical content. So this doesn't tell us, for example, whether to assign higher probability to MWI or to the disjunction of all non-MWI interpretations.
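
(As a toy illustration: here is a minimal Bayes sketch, with purely made-up numbers, of why evidence that both hypotheses predict equally well leaves their relative probabilities exactly where they started. The flat prior and the 0.8 likelihoods are assumptions for illustration, not claims about the actual state of the evidence.)

    # Toy Bayes update: if an observation is predicted equally well by MWI
    # and by not-MWI, the likelihood ratio is 1, so the posterior odds
    # equal the prior odds -- 'lacks empirical content' cuts both ways.
    prior = {"MWI": 0.5, "not-MWI": 0.5}        # hypothetical flat prior
    likelihood = {"MWI": 0.8, "not-MWI": 0.8}   # both predict the data equally well

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}
    print(posterior)  # {'MWI': 0.5, 'not-MWI': 0.5} -- unchanged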

Suppose your ancestors sent out a spaceship eons ago, and by your calculations it recently traveled so far away that no physical process could ever cause you and the spaceship to interact again. If you then want to say that 'the claim the spaceship still exists lacks empirical content,' then OK. But you will also have to say 'the claim the spaceship blipped out of existence when it traveled far enough away lacks empirical content'.

And there will still be some probability, given the evidence, that the spaceship did vs. didn't blip out of existence; and just saying 'it lacks empirical content!' will not tell you whether to design future spaceships so that their life support systems keep operating past the point of no return.

By that logic, if I invent any crazy hypothesis in addition to an empirically testable theory, then it inherits testability just on those grounds. You can do that with the word "testability" if you want, but that seems to be not how people use words.

There's no ambiguity if you clarify whether you're talking about the additional crazy hypothesis, vs. talking about the conjunction 'additional crazy hypothesis + empirically testable theory'. Presumably you're imagining a scenario where the conjunction taken as a whole is testable, though one of the conjuncts is not. So just say that.

Sean Carroll summarizes collapse-flavored QM as the conjunction of these five claims:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.

  2. Wave functions evolve in time according to the Schrödinger equation.

  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.

  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.

  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5 -- i.e., it's an affirmation of wave functions and their dynamics (which effectively all physicists agree about), plus a rejection of the 'collapses' some theorists add to keep the world small and probabilistic. (If you'd like, you could supplement 'not 5' with 'not Bohmian mechanics'; but for present purposes we can mostly lump Bohm in with multiverse interpretations, because Eliezer's blog series is mostly about rejecting collapse rather than about affirming a particular non-collapse view.)
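
(For concreteness, claims 2 and 4 correspond to the standard textbook equations; in the usual notation, with |ψ⟩ the quantum state and H the Hamiltonian:)

    % Claim 2: unitary evolution under the Schrödinger equation
    i\hbar \,\frac{\partial}{\partial t}\,|\psi(t)\rangle = H\,|\psi(t)\rangle

    % Claim 4: the Born rule, for a nondegenerate eigenvalue a
    P(a) = |\langle a|\psi(t)\rangle|^{2}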

If we want 'QM' to be the neutral content shared by all these interpretations, then we can say that QM is simply the conjunction of 1 and 2. You are then free to say that we should assign 50% probability to claim 5, and maintain agnosticism between collapse and non-collapse views. But realize that, logically, either collapse or its negation does have to be true. You can frame denying collapse as 'positing invisible extra worlds', but you can equally frame denying collapse as 'skepticism about positing invisible extra causal laws'.

Since every possible way the universe could be adds something 'extra' on top of what we observe -- either an extra law (e.g., collapse) or extra ontology (because there are no collapses occurring to periodically annihilate the ontology entailed by the Schrödinger equation) -- it's somewhat missing the point to attack any given interpretation for the crime of positing something extra. The more relevant question is just whether simplicity considerations or indirect evidence helps us decide which 'something extra' (a physical law, or more 'stuff', or both) is the right one. If not, then we stick with a relatively flat prior.

Claims 1 and 2 are testable, which is why we were able to acquire evidence for QM in the first place. Claim 5 is testable for pretty much any particular 'collapse' interpretation you have in mind; which means the negation of claim 5 is also testable. So all parts of bare-bones MWI are testable (though it may be impractical to run many of the tests), as long as we're comparing MWI to collapse and not to Bohmian Mechanics.

(You can, of course, object that affirming 3-5 as fundamental laws has the advantage of getting us empirical adequacy. But 'MWI (and therefore also 'bare' QM) isn't empirically adequate' is a completely different objection from 'MWI asserts too many unobserved things', and in fact the two arguments are in tension: it's precisely because Eliezer isn't willing to commit himself to a mechanism for the Born probabilities in the absence of definitive evidence that he's sticking to 'bare' MWI and leaving almost entirely open how these relate to the Born rule. In the one case you'd be criticizing MWI theorists for refusing to stick their neck out and make some guesses about which untested physical laws and ontologies are the real ones; in the other case you'd be criticizing MWI theorists for making guesses about which untested physical laws and ontologies are the real ones.)

I am not super interested in having Catholic theologians read about minimum descriptive complexity, and then weaving a yarn about their favorite hypotheses based on that.

Are you kidding? I would love it if theologians stopped hand-waving about how their God is 'ineffably simple no really we promise' and started trying to construct arguments that God (and, more importantly, the package deal 'God + universe') is information-theoretically simple, e.g., by trying to write a simple program that outputs Biblical morality plus the laws of physics. At best, that sort of precision would make it much clearer where the reasoning errors are; at worst, it would be entertainingly novel.
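
(A crude first pass at that exercise is even scriptable. To be clear, this is a toy sketch of my own: compressed length is only a rough, computable stand-in for Kolmogorov complexity, and the two 'hypothesis' strings are placeholders, but the shape of the comparison would look something like this:)

    import zlib

    # Toy minimum-description-length comparison: compressed size as a crude,
    # computable proxy for Kolmogorov complexity (which is uncomputable).
    def description_cost(hypothesis: str) -> int:
        return len(zlib.compress(hypothesis.encode("utf-8")))

    physics = "i*hbar*d/dt psi = H psi"  # placeholder for 'the laws of physics'
    theology = physics + " plus a lawgiver who wills these laws and Biblical morality"

    print(description_cost(physics), description_cost(theology))
    # The package deal pays for everything the bare laws cost, plus the
    # additional moral and agential content on top.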

Comment author: TheAncientGeek 15 September 2017 02:11:02PM 0 points

Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5

Plus 6: There is a preferred basis.

Comment author: RobbBB 24 May 2015 01:03:20AM *  2 points

That's not true. (Or, at best, it's misleading for present purposes.)

First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI, you'd need to talk specifically about which hypothesis MWI is untestable relative to; and you would then need to cite some other reason to reject MWI (e.g., its complexity relative to the other hypothesis, or its failures relative to some third hypothesis that it is testable relative to).

With that in mind:

  • 1 - MWI is testable insofar as QM itself is testable. We normally ignore this fact because we're presupposing QM, but it's important to keep in mind if we're trying to make a general claim like 'MWI is unscientific because it's untestable and lacks evidential support'. MWI is at least as testable as QM, and has at least as much supporting evidence.

  • 2 - What I think people really mean to say (or what a steel-manned version of them would say) is that multiverse-style interpretations of QM are untestable relative to each other. This looks likely to be true, for practical purposes, when we're comparing non-collapse interpretations: Bohmian Mechanics doesn't look testable relative to Many Threads, for example. (And therefore Many Threads isn't testable relative to Bohmian Mechanics, either.)

(Of course, many of the things we call "Many Worlds" are not fully fleshed out interpretations, so it's a bit risky to make a strong statement right now about what will turn out to be testable in the real world. But this is at least a commonly accepted bit of guesswork on the part of theoretical physicists and philosophers of physics.)

  • 3 - But, importantly, collapse interpretations generally are empirically distinguishable from non-collapse interpretations. So even though non-collapse interpretations are generally thought to be 'untestable' relative to each other, they are testable relative to collapse interpretations. (And collapse interpretations as a rule are falsifiable relative to each other.)

To date, attempts to test collapse interpretations have falsified the relevant interpretations. It is not yet technologically possible to test the most popular present-day ones, but it is possible for collapse theorists to argue 'our views should get more attention because they're easier to empirically distinguish', and it's also possible for anti-collapse theorists to try to make inductive arguments from past failures to the likelihood of future failures, with varying amounts of success.
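
(One toy form of that inductive argument is Laplace's rule of succession; the count below is a placeholder, not a real tally of collapse-model experiments:)

    # Laplace's rule of succession: if the first n tests of collapse models
    # all falsified the model under test, a simple estimate of the probability
    # that the next test does likewise is (n + 1) / (n + 2).
    def rule_of_succession(failures: int, trials: int) -> float:
        return (failures + 1) / (trials + 2)

    n = 10  # hypothetical: ten collapse models tested so far, all falsified
    print(rule_of_succession(n, n))  # 11/12, roughly 0.92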

Comment author: TheAncientGeek 15 September 2017 02:04:14PM 0 points

First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI,

I think it's being used as an argument against beliefs paying rent.

MWI is testable insofar as QM itself is testable.

Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, and have not been, empirical.

But, importantly, collapse interpretations generally are empirically distinguishable from non-collapse interpretations.

No, they are not, because of the meaning of the word "interpretation"; but collapse theories, such as GRW, might be.

Comment author: RobbBB 22 May 2015 09:49:24PM *  11 points

Thanks for taking the time to explain your reasoning, Mark. I'm sorry to hear you won't be continuing the discussion group! Is anyone else here interested in leading that project, out of curiosity? I was getting a lot out of seeing people's reactions.

I think John Maxwell's response to your core argument is a good one. Since we're talking about the Sequences, I'll note that this dilemma is the topic of the Science and Rationality sequence:

In any case, right now you've got people dismissing cryonics out of hand as "not scientific", like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened. "Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital". Maybe Martin Gardner warned them against believing in strange things without experimental evidence. So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

—a calculated bet you could only make rationally [i.e., using your own inference skills, without just echoing data from an experimental study, and without just echoing established, expert-verified scientific conclusions].

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it. But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory. And sometimes you just can't get clear experimental evidence at all.

Either way, you have to try to do the thing that Science doesn't trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

(Oh, and sometimes a disconfirming experimental result looks like: "Your entire species has just been wiped out! You are now scientifically required to relinquish your theory. If you publicly recant, good for you! Remember, it takes a strong mind to give up strongly held beliefs. Feel free to try another hypothesis next time!")

This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes (e.g., in the case of cryonics and existential risk) the answer matters a lot for our decision-making, long before we have a definitive scientific answer. That doesn't mean we should despair of empirically investigating these questions, but it does mean that our decision-making needs to be high-quality even during periods where we're still in a state of high uncertainty.
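
(The 'calculated bet' framing can be made concrete with a one-line expected-value comparison. All the numbers here are invented placeholders, not estimates I am defending:)

    # Toy expected-value comparison for acting under unresolved uncertainty:
    # choose the option with the higher probability-weighted payoff.
    p_works = 0.05          # hypothetical probability the intervention works
    value_if_works = 1000   # hypothetical payoff (arbitrary units)
    cost = 10               # hypothetical cost of taking the bet

    ev_act = p_works * value_if_works - cost
    ev_wait = 0.0           # baseline: do nothing, get nothing
    print(ev_act, ev_wait)  # 40.0 vs 0.0 -- the bet can be rational long
                            # before the science is settled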

The Sequences talk about the Many Worlds Interpretation precisely because it's an unusually-difficult-to-test topic. The idea isn't that this is a completely typical example, or that it's a good idea to disregard evidence when it is available; the idea, rather, is that we sometimes do need to predicate our decisions on our best guess in the absence of perfect tests.

Its placement in Rationality: From AI to Zombies immediately after the 'zombies' sequence (which, incidentally, is an example of how and why we should reject philosophical thought experiments, no matter how intuitively compelling they are, when they don't accord with established scientific theories and data) is deliberate. Rather than reading either sequence as an attempt to defend a specific fleshed-out theory of consciousness or of physical law, they should primarily be read as attempts to show that extreme uncertainty about a domain doesn't always bleed over into 'we don't know anything about this topic' or 'we can't rule out any of the candidate solutions'.

We can effectively rule out epiphenomenalism as a candidate solution to the hard problem of consciousness even if we don't know the answer to the hard problem (which we don't), and we can effectively rule out 'consciousness causes collapse' and 'there is no objective reality' as candidate solutions to the measurement problem in QM even if we don't know the answer to the measurement problem (which, again, we don't). Just advocating 'physicalism' or 'many worlds' is a promissory note, not a solution.

In discussions of EA and x-risk, we likewise need to be able to prioritize more promising hypotheses over less promising ones long before we've answered all the questions we'd like answered. Even deciding what studies to fund presupposes that we've 'philosophized', in the sense of mentally aggregating, heuristically analyzing, and drawing tentative conclusions from giant complicated accumulated-over-a-lifetime data sets.

You wrote:

The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available.

That's true, and it's one of the basic assumptions behind MIRI research: that understanding agents smarter than us isn't obviously hopeless, because our human capacity for abstract reasoning makes it possible for us to model systems even when they're extremely complex and dynamic. MIRI's research is intended to make this likelier to happen.

It's not the default that we're always able to predict what our inventions will do before we run them to see what happens; and there are some basic limits on our ability to do so when the system we're predicting is smarter than the predictor. But with enough intellectual progress we may become able to model abstract safety-relevant features of AGI behavior, even though we can't predict in detail the exact decisions the AGI will make. (If we could predict the exact decisions of the AGI, we'd have to be at least as smart as the AGI.)

If it isn't possible to learn a variety of generalizations about smarter autonomous systems, then, interestingly, that also undermines the case for intelligence explosion. Both 'humans trying to make superintelligent AI safe' and 'AI undergoing a series of recursive self-improvements' are cases where less intelligent agents are trying to reliably generate agents that meet various abstract criteria (including superior intelligence). The orthogonality thesis, likewise, simultaneously supports the claim 'many possible AI systems won't have humane goals' and 'it is possible for an AI system to have humane goals'. This is why Bostrom/Yudkowsky-type arguments don't uniformly inspire pessimism.

Are you familiar with MIRI's technical agenda? You may also want to check out the AI Impacts project, if you think we should be prioritizing forecasting work at this point rather than object-level mathematical research.

Comment author: TheAncientGeek 15 September 2017 01:48:34PM 0 points

This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes [..] the answer matters a lot for our decision-making,

Which is one of the ways in which beliefs that don't pay rent do pay rent.

Comment author: Erfeyah 05 September 2017 06:27:51PM 0 points

Your comment seems to me an indication that you don't understand what I am talking about. It is a complex subject and in order to formulate a coherent rational argument you will need to study it in some depth.

Comment author: TheAncientGeek 15 September 2017 01:33:26PM 0 points

I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.

Comment author: TheAncientGeek 15 September 2017 01:24:57PM *  0 points

a state is good when it engages our moral sensibilities

Individually, or collectively?

We don't encode locks, but we do encode morality.

Individually or collectively?

Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it

The goodness-to-you or the objective goodness?

if you are going to say that morality "is" human value, you are faced with the fact that humans vary in their values... the fact that creates the suspicion of relativism.

This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.

It's not clearly relativism and it's not clearly not-relativism. Those of us who are confused by it are confused because we expect a metaethical theory to say something on the subject.

The opposite of Relative is Absolute or Objective. It isn't Intrinsic. You seem to be talking about something orthogonal to the absolute-relative axis.

Comment author: chaosmage 11 September 2017 08:19:25AM 1 point

"services that go visit the customer outcompete ones that the customer has to go visit" - and what does this have to do with self-driving cars? Whether the doctor has to actively drive the car to travel to the patient, or can just sit there in the car while the car drives all the way, the same time is still lost due to the travel, and the same fuel is still used up.

Yes. But a significant part of the job of a doctor is paperwork (filing stuff for insurance companies etc.) and she can do that while the car drives itself. If she had to hire a driver (and have the driver sit idle while she's with a patient), the driver would be the most expensive part of her vehicle, just like the taxi driver is the most expensive part of the taxi.

If she's the kind of doctor that can carry all her equipment inside that car (i.e. not a radiologist 😉) she might even be able to abolish her office and waiting room entirely, for extra savings.

And self driving hotel rooms? What, are we in the Harry Potter world where things can be larger in the inside than in the outside?

No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.

Comment author: TheAncientGeek 13 September 2017 12:12:26PM 0 points

No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.

And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.

Comment author: Erfeyah 05 September 2017 02:58:01PM 0 points

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way.

Sure, this is a valid hypothesis. But my assessment and the individual points I offered above can be applied to this possibility as well, uncovering the same issues with it.

In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles, noting undesirable consequences.

Novel situations can be seen through the lens of certain stories because those stories operate at such a level of abstraction that they are applicable to all human situations. The most universal and permanent levels of abstraction are considered archetypal. These would apply equally to a human living in a cave thousands of years ago and a Wall Street lawyer. Of course it is also true that the stories always need to be revisited to avoid their dissolution into dogma as the environment changes. Interestingly, it turns out that there are stories that recognize this need for 'revisiting' and deal with the strategies and pitfalls of the process.

Comment author: TheAncientGeek 05 September 2017 05:39:50PM 0 points

That amounts to "I can make my theory work if I keep on adding epicycles".

Comment author: Erfeyah 30 August 2017 10:19:12AM *  3 points

It is extremely interesting to see the community's attempts to justify values through, or extract values from, rationality. I have been pointing to the alternative perspective, based on the work of Jordan Peterson, in which morality is grounded on evolved behavioral patterns. It is rationally coherent and strongly supported by evidence. The only 'downside', if you can call it that, is that it turns out that morality is not based on rationality, and the "ought from an is" problem is an accurate portrayal of our current (and maybe general) situation.

I am not going to expand on this unless you are interested, but I have a question. What does the rationalist community in general, and your article in particular, try to get at? I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition

I do not see how [1] can be true, since we can observe the emergence of moral values in cultures in which rationality is hardly developed. Furthermore, even today, as your article shows, we are struggling to extract value from rational argument, so our intuition cannot stem from something we haven't even succeeded at. As for [2], it is a very interesting proposal, but I haven't seen any scientific evidence linking it to structures in the human brain.

I feel the rationalist community is resistant to entertaining the alternative because, if true, it would show that rationality is not the foundation of everything but a tool for assessing and manipulating. Maybe further resistance is caused because (in a slightly embarrassing turn of events) it brings stories, myth and religion into the picture again, albeit in a very different manner. But even if that proves to be the case, so what? What is our highest priority here? Rationality or Truth?

Comment author: TheAncientGeek 05 September 2017 01:13:06PM 0 points

I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition...

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles, noting undesirable consequences.

Comment author: John_Maxwell_IV 30 August 2017 12:51:03AM *  2 points

Glad to see you're working on this, it looks pretty nice!

I think the bottleneck for efforts like this is typically marketing, not code. (Analogy: If you want to found a city, the first step is not to go off alone in to the wilderness and build a bunch of houses.) I think I've seen other argument mapping sites, and it seems like every few months someone announces a new & improved discussion website on SlateStarCodex (then it proceeds to not get traction). I suspect the solution is to form a committee/"human kickstarter" of some kind so that everyone who's interested in this problem can coordinate to populate the same site simultaneously. For a project like yours that already has code, the best approach might be to try to join forces with a blogger who already has traffic, or a discussion site that already has a demand for a debate map, or something like that.

Comment author: TheAncientGeek 30 August 2017 11:54:48AM 0 points

Seconded.
