Oy, I'm not following you either; apologies. You said:
The common criticism of Pearl is that this assumption fails if one assumes quantum mechanics is true.
...implying that people generally criticize his theory for "breaking" at quantum mechanics. That is, to find a system outside his "subset of causal systems" critics have to reach all the way to quantum mechanics. He could respond "well, QM causes a lot of trouble for a lot of theories." Not bullet-proof, but still. However, you started (your very first comment) by saying...
That philosophy itself can't be supported by empirical evidence; it rests on something else.
Right, and I'm asking you what you think that "something else" is.
I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?
I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.
Hmm... I suspect the phrasing "evidence/phenomena in the world" might give my assertion an overly mechanistic sound. I don't mean that verifiable/disprovable physical/atomistic facts must be cited-- that would be begging the question. I just mean that any meaningful argument must make reference to evidence that can be pointed to in support of, or in criticism of, the given argument. Not...
Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say.
It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that, when I ask "hey, okay, if the language is just tough, but there is content ...
If they can't stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!
This is kind of what rubs me the wrong way about the above "idea selection" point. Is the implication here that the only utility of working through Hume's or Kant's original text is to separate the "correct" facts from the chaff? Seems like working through the text could be good for other reasons.
I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and the presiding explicit assumption (or at least, I thought it was presiding... now I can't seem to get anyone to defend it, so maybe not) was that the most efficient means to those particular ends were therefore the most rational.
Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:
...Lesswrongers will be encouraged to learn that the Torchwood charact
It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.)
...so you're NOT attempting to respond to my original question? My original question was "what's irrational about not sacrificing the children?"
Wonderful post.
...Because the brain is a hodge podge of dirty hacks and disconnected units, smoothing over and reinterpreting their behaviors to be part of a consistent whole is necessary to have a unified 'self'. Drescher makes a somewhat related conjecture in "Good and Real", introducing the idea of consciousness as a 'Cartesian Camcorder', a mental module which records and plays back perceptions and outputs from other parts of the brain, in a continuous stream. It's the idea of "I am not the one who thinks my thoughts, I am the one who hea
Okay, so I'll ask again: why couldn't the humans' real preference be to not sacrifice the children? Remember, you said:
You can't decide your preference, preference is not what you actually do, it is what you should do
You haven't really elucidated this. You're either pulling an ought out of nowhere, or you're saying "preference is what you should do if you want to win". In the latter case, you still haven't explained why giving up the children is winning, and not doing so is not winning.
And the link you gave doesn't help at all, since, if we're...
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.
Okay... so again, I'll ask... why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?
That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?
Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
Couldn't these people care about not sacrificing autonomy, in which case that would be a value they're successfully fulfilling?
Which of the decisions is (actually) the better one depends on the preferences of the one who decides
So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?
...However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculati
If a decision decreases utility, is it not irrational?
I don't see how you could go about proving this.
...As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless
Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire for preserving human life is an objective good that is the proper aim for all (see my other posts above). This sounds plausible to me, but it doesn't sound obvious/rationally derived/etc.
I could, after all, phrase it in the reverse manner. IF I assume that dignity/autonomy is objectively good:
...then the question becomes "everyone preserves their objectively good dignity"
What do the space monsters deserve?
Haha, I was not factoring that in. I assumed they were evil. Perhaps that was closed-minded of me, though.
The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?
Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.
...To put i
Yeah, the sentiment expressed in that post is usually my instinct too.
But then again, that's the problem: it's an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.
I don't quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?
I don't think the notion of dignity is completely meaningless. After all, we don't just want the maximum number of people to be happy, we also want people to get what they deserve-- in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem the more morally agreeable: the scenario in which the 10% good people were ensured perennial happiness at the expense ...
The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover.
The author seems to assert that this is a cultural phenomenon. I wonder, however, whether our attempts at unifying our intuitions into a theory might not be instinctive. Would it then be so obvious that Moral Realism were false? We have an innate demand for consistency in our moral principles that might allow us to say something like "racism i...
The example of the paralysis anosognosia rationalization is, for some reason, extremely depressing to me.
Does anyone understand why this only happens in split-brain patients when their right hemisphere motivates an action? Shouldn't it happen quite often, since the right side has no way of communicating to the left side "it's time to try a new theory," and the left side is the one that we'll be talking to?
If they are being called "fundamentally mental" because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it's not consistent with a reductionist worldview...
Is it therefore a priori logically incoherent? That's what I'm trying to understand. Would you exclude a "Cartesian theatre" fundamental particle a priori?
...(and it's also confused because you're not getting at how mental is different from non-mental). However, if they are being called fundamentally m
What if qualions really existed in a material way, and there were physical laws describing how they were caught and accumulated by neural cells? There's absolutely no evidence for such a theory, so it's crazy, but it's not logically impossible or inconsistent with reductionism, right?
Hmm... excellent point. Here I do think it begins to get fuzzy... what if these qualions fundamentally did stuff that we typically attribute to higher-level functions, such as making decisions? Could there be a "self" qualion? Could their behavior be indeterministi...
Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider "lower level".
Oh! Certainly. But this doesn't seem to exclude "mind", or some element thereof, from being irreducible-- which is what Eliezer was trying to argue, right? He's trying to support reductionism, and this seems to include an attack on "fundamentally mental" entities. Based on what you'r...
QM possesses some fundamental level of complexity, but I wouldn't agree in this context that it's "fundamentally complicated".
I see what you mean. It's certainly a good distinction to make, even if it's difficult to articulate. Again, though, I think it's Occam's Razor and induction that make us prefer the simpler entities-- they aren't the sole inhabitants of the territory by default.
Indeed, an irreducible entity (albeit one with describable, predictable behavior) is not much better than a miracle. This is why Occam's Razor, insisting that our model of the world should not postulate needless entities, insists that everything should be reduced to one type of stuff if possible. But the "if possible" is key: we verify through inference and induction whether or not it's reasonable to think we'll be able to reduce everything, not through a priori logic.
That said, I wonder if the claim can't be near-equivalently rephrased as "it's impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions".
Ah, that's very interesting. Now we're getting somewhere.
I don't think it has to be arbitrary. Couldn't the following scenario be the case?
The universe is full of entities that experiments show to be reducible to fundamental elements with laws (say, quarks), or entities that induction + parsimony tell us ought to be reducible to fundamental elements (since these entiti...
Of course it's technically possible that the territory will play a game of supernatural and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to an extent of being impossible, a priori, without need for further experiments to drive the certainty to absolute.
Not quite sure what you're saying here. If you're saying:
1)"Entities in the map will not magically jump into the territory," Then I never disagreed with this. What I disagreed with is your labeling certain things as obviously in the...
To loqi and Nesov:
Again, both of your responses seem to hinge on the assumption that my challenge below is easily answerable, and has already been answered:
...Tell me the obvious, a priori logically necessary criteria for a person to distinguish between "entities within the territory" and "high-level concepts." If you can't give any, then this is a big problem: you don't know that the higher level entities aren't within the territory. They could be within the territory, or they could be "computational abstractions." Either position i
This doesn't really answer the question, though. I know that a priori means "prior to experience", but what does this consist of? Originally, for something to be "a priori illogical", it was supposed to mean that it couldn't be thought without contradicting oneself, because of pre-experiential rules of thought. An example would be two straight lines on a flat surface forming a bounded figure-- it's not just wrong, but inconceivable. As far as I can tell, an irreducible entity doesn't possess this inconceivability, so I'm trying to figur...
Thanks, that is helpful.
My claim was that, if we represent the gears example via the underlying (classical) physics of the system, using Pearl's functional causal models, there's nothing cyclic about the system. Thus, Pearl's causal theory doesn't need to resort to the messy, expensive stuff for such systems. It only needs to get messy for systems which are a) cyclic, and b) implausible to model via their physics-- for example, negative and positive feedback loops (smoking causes cancer causes despair causes smoking).
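To make that concrete, here's a minimal sketch (my own toy names and dynamics, not Pearl's notation or the original gears example): once you index the gears' states by time, each structural equation depends only on variables from the previous tick, so the graph of structural equations is a DAG, even though the macro-level description "A turns B and B pushes back on A" sounds cyclic.

    # Toy "unrolled" functional causal model of two meshed gears.
    # Variable names and dynamics are illustrative assumptions only.
    # Each time-indexed variable is a function solely of variables at
    # the previous time step, so the causal graph has no cycles.

    def step(theta_a, theta_b, torque, ratio=2.0):
        """One tick of the (classical, discretized) physics."""
        d_a = torque                    # how far gear A advances this tick
        new_a = theta_a + d_a           # A(t) := f(A(t-1), torque)
        new_b = theta_b - d_a / ratio   # B(t) := f(B(t-1), A's motion)
        return new_a, new_b

    theta_a, theta_b = 0.0, 0.0
    for t in range(1, 6):
        theta_a, theta_b = step(theta_a, theta_b, torque=1.0)
        print(t, round(theta_a, 2), round(theta_b, 2))

The contrast with the smoking/cancer/despair loop is that there we can't plausibly write down the underlying time-indexed physics, so the cycle has to be handled at the macro level.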