All of jwdink's Comments + Replies

Thanks, that is helpful.

My claim was that, if we simply represent the gears example by representing the underlying (classical) physics of the system via Pearl's functional causal models, there's nothing cyclic about the system. Thus, Pearl's causal theory doesn't need to resort to the messy expensive stuff for such systems. It only needs to get messy in systems which are a) cyclic, and b) implausible to model via their physics-- for example, negative and positive feedback loops (smoking causes cancer causes despair causes smoking).

Oy, I'm not following you either; apologies. You said:

The common criticism of Pearl is that this assumption fails if one assumes quantum mechanics is true.

...implying that people generally criticize his theory for "breaking" at quantum mechanics. That is, to find a system outside his "subset of causal systems" critics have to reach all the way to quantum mechanics. He could respond "well, QM causes a lot of trouble for a lot of theories." Not bullet-proof, but still. However, you started (your very first comment) by saying... (read more)

2[anonymous]
Hopefully the following clarifies my position. In what follows, "Pearl's causal theory" refers to all instances of Pearl's work of which I am aware. "DAG theory" refers only to the fragment which a priori assumes all causal models are directed acyclic graphs.

Claim 1: DAG theory can't cope with the gears example. False. For the third time, there exists an approximation of the gears example that is a directed acyclic graph. See the link in my second comment for the relevant picture.

Claim 2: Pearl's causal theory can't cope with the gears example. False. If the approximation in claim 1 doesn't satisfy you, then there exists a messy, more computationally expensive extension of the DAG theory that can deal with cyclic causal graphs.

Claim 3: Pearl's causal theory describes all causal systems everywhere. False. This is the only claim to which quantum mechanics is relevant.

If his theory breaks in situations as mundane and simple as the gears example above, then why have common criticisms employed the vagaries of quantum mechanics in attempting to criticize the Markov assumption? They might as well have just used simple examples involving gears.

2[anonymous]
I don't follow. You made a claim of the form "For all causal systems, this theory ought to describe them." I demonstrated otherwise by exhibiting an explicit assumption Pearl makes at the outset, and by noting that because of this assumption the theory applies only to a subset of causal systems. Gears are classical objects, and so a simple example involving gears doesn't elucidate the weaknesses of assuming all processes are Markov. Then, I alluded to how one can hack around cycles in causal graphs by approximating them with "ladders". As far as I can tell you're assuming some narrative between these points; there isn't one.

The theory is supposed to describe ANY causal system-- otherwise it would be a crappy theory of how (normatively) people ought to reason causally, and how (descriptively) people do reason causally.

2[anonymous]
No, it's not. In particular, most of Pearl's work applies only under some sort of assumption that the underlying process is Markovian. The common criticism of Pearl is that this assumption fails if one assumes quantum mechanics is true. He addresses this in Causality, around chapter two or three. He also addresses extensions to possibly-cyclic diagrams, but the technicalities become annoying. If you are okay with discretizing time, then Timeless Causality shows a "ladder"-like directed acyclic graph that will approximate the causal system.
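
To make the "ladder" idea concrete, here is a minimal toy sketch (my own construction, not from Pearl or Timeless Causality): two meshed gears whose positions mutually constrain each other look cyclic, but once each variable is indexed by a discrete time step, every structural equation points from time t-1 to time t and the graph is acyclic. The dynamics below are deliberately toy, not real mechanics.

```python
# Toy sketch (hypothetical dynamics): unrolling the seemingly cyclic
# gear system A <-> B into a "ladder" DAG by discretizing time.
T = 5          # number of discrete time steps
ratio = -1.0   # meshed gears counter-rotate (toy constraint, not physics)

def step_a(b_prev):   # A at time t is a function only of B at time t-1
    return ratio * b_prev

def step_b(a_prev):   # B at time t is a function only of A at time t-1
    return ratio * a_prev

gear_a = [1.0]  # exogenous initial positions
gear_b = [0.0]
for t in range(1, T):
    gear_a.append(step_a(gear_b[t - 1]))
    gear_b.append(step_b(gear_a[t - 1]))

# The resulting causal graph: every edge goes strictly forward in time,
# so there are no cycles -- a "ladder" rather than a loop.
edges = [(f"B[{t-1}]", f"A[{t}]") for t in range(1, T)] + \
        [(f"A[{t-1}]", f"B[{t}]") for t in range(1, T)]
print(edges)
```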

That philosophy itself can't be supported by empirical evidence; it rests on something else.

Right, and I'm asking you what you think that "something else" is.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?

-2mtraven
Hell, how would I know? Let's say "thinking" for the sake of argument. People think it makes sense. "Definitions may be given in this way of any field where a body of definite knowledge exists. But philosophy cannot be so defined. Any definition is controversial and already embodies a philosophic attitude. The only way to find out what philosophy is, is to do philosophy." -- Bertrand Russell

Unless you think the "Heideggerian critique of AI" is a good example. In which case I can engage that.

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

Hmm.. I suspect the phrasing "evidence/phenomena in the world" might give my assertion an overly mechanistic sound to it. I don't mean verifiable/disprovable physical/atomistic facts must be cited-- that would be begging the question. I just mean any meaningful argument must make reference to evidence that can be pointed to in support of/ in criticism of the given argument. Not... (read more)

-1mtraven
I'm not at all a fan of Hegel, and Heidegger I don't really understand, but I linked to a paper that describes the interaction of Heideggerian philosophy and AI which might answer your question. I still think you don't have your categories straight. Philosophy does not make "claims" that are proved or disproved by evidence (although there is a relatively new subfield called "experimental philosophy"). Think of it as providing alternate points of view. To illustrate: your idea that the only valid utterances are those that are supported by empirical evidence is a philosophy. That philosophy itself can't be supported by empirical evidence; it rests on something else.
0jwdink
Unless you think the "Heideggerian critique of AI" is a good example. In which case I can engage that.

Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say.

It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that, when I ask "hey, okay, if the language is just tough, but there is content ... (read more)

0mtraven
I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy. So the question is whether philosophy's position as meta to science and everything else can provide utility. I've found it useful, YMMV. BTW here is the latest round of Heideggerian critique of AI (pdf) which, again, you may or may not find useful.

That's fantastic. What school was this?

If they can't stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!

This is what kind of rubs me the wrong way about the above "idea selection" point. Is the implication here that the only utility of working through Hume or Kant's original text is to cull the "correct" facts from the chaff? Seems like working through the text could be good for other reasons.

Haha, we must have very different criteria for "confusing." I found that post very clear, and I've struggled quite a bit with most of your posts. No offense meant, of course: I'm just not very versed in the LW vernacular.

0Vladimir_Nesov
My comments can be confusing, or difficult because of the wider inferential gaps they have to bridge. In this case I meant that nickernst's comment could just be expressed much more clearly.

I agree generally that this is what an irrational value would mean. However, the presiding implicit assumption was that the utilitarian ends were the correct ones, and the presiding explicit assumption (or at least, I thought it was presiding... now I can't seem to get anyone to defend it, so maybe not) was therefore that the most efficient means to these particular ends were the most rational.

Maybe I was misunderstanding the presiding assumption, though. It was just stuff like this:

Lesswrongers will be encouraged to learn that the Torchwood charact

... (read more)

I don't get you

Could you say why?

Okay, that's fine. So you'll agree that the various people--who were saying that the decision made in the show was the rational route--these people were speaking (at least somewhat) improperly?

It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.)

...so you're NOT attempting to respond to my original question? My original question was "what's irrational about not sacrificing the children?"

0Vladimir_Nesov
There is nothing intrinsically irrational about any action; rationality or irrationality depends on preference, which is the point I was trying to communicate. Any question about "rationality" of a decision is a question about correctness of preference-optimization. So, my reply to your original question is that the question is ill-posed, and the content of the reply was an explanation as to why.

Wonderful post.

Because the brain is a hodge podge of dirty hacks and disconnected units, smoothing over and reinterpreting their behaviors to be part of a consistent whole is necessary to have a unified 'self'. Drescher makes a somewhat related conjecture in "Good and Real", introducing the idea of consciousness as a 'Cartesian Camcorder', a mental module which records and plays back perceptions and outputs from other parts of the brain, in a continuous stream. It's the idea of "I am not the one who thinks my thoughts, I am the one who hea

... (read more)
-1ajayjetti
I don't get you

Okay, so I'll ask again: why couldn't the humans' real preference be to not sacrifice the children? Remember, you said:

You can't decide your preference, preference is not what you actually do, it is what you should do

You haven't really elucidated this. You're either pulling an ought out of nowhere, or you're saying "preference is what you should do if you want to win". In the latter case, you still haven't explained why giving up the children is winning, and not doing so is not winning.

And the link you gave doesn't help at all, since, if we're... (read more)

-3Vladimir_Nesov
It seems like you are seeing my replies as soldier-arguments for the object-level question about the sacrifice of children, stumped on a particular conclusion that sacrificing children is right, while I'm merely giving opinion-neutral meta-comments about the semantics of such opinions. (I'm not sure I'm reading this right.) Preference defines what constitutes winning: your actions rank high in the preference order if they bring about a world that ranks high in the preference order. Preference can't be reduced to winning or actions, as these are all sides of the same structure.

But there seemed to be some suggestion that an avoidance of sacrificing the children, even at the risk of everyone's lives, was a "less rational" value. If it's a value, it's a value... how do you call certain values invalid, or not "real" preferences?

1[anonymous]
I missed where Vladimir made that suggestion, though I'm sure others have. You can have an irrational value, if it's really a means and not an end (which is another value), but you don't recognize that, and call the means a value itself. Means to an end can of course be evaluated as rational. If anyone made the suggestion you mention, they probably presumed a single "basic" value of preserving lives, and considered the method of deciding to be a means, but denoted as a value. (Of course, a value can be both a means and an end, which presents fun new complications...)

Excellent response.

As a side note, I do suspect that there's a big functional difference between an entity that feels a small voice in the back of the head and an entity that feels pain like we do.

0AndrewH
Agreed, pain overwhelming your entire thoughts is too extreme, though understandable how it evolved this way.

How does one define "bad" without "pain" or "suffering"? Seems rather difficult. Or: The question doesn't seem so much difficult as it is (almost) tautological. It's like asking "What, if anything, is hot about atoms moving more quickly?"

Oh, it's no problem if you point me elsewhere. I should've specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I'll check them out.

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Okay... so again, I'll ask... why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?

2orthonormal
I understand your frustration, since we don't seem to be saying much to support our claims here. We've discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points. However, there's a lot of material that's already been said elsewhere, so I hope you'll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go. Torture vs. Dust Specks kicked off the arguing; Eliezer began arguing for his own position in Circular Altruism and The "Intuitions" Behind "Utilitarianism". Searching LW for keywords like "specks" or "utilitarian" should bring up more recent posts as well, but these three sum up more or less what I'd say in response to your question. (There's a whole metaethics sequence later on (see the whole list of Eliezer's posts from Overcoming Bias), but that's less germane to your immediate question.)

That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?

0Vladimir_Nesov
Preference of a given human is defined by their brain, and can be somewhat different from person to person, but not too much. There is nothing "objective" about this preference, but for each person there is one true preference that is their own, and the same could be said for humanity as a whole, with the whole planet defining its preference instead of just one brain. The focus on the brain isn't very accurate though, since environment plays its part as well.

I can't do justice to the centuries-old problem with a few words, but the idea is more or less this. Whatever the concept of "preference" means, when human philosophers talk about it, their words are caused by something in the world: "preference" must be either a mechanism in their brain, a name for their confusion, or something else. It's not epiphenomenal. Searching for the "ought" in the world outside human minds is more or less a guaranteed failure, especially if the answer is expected to be found explicitly, as an exemplar of perfection rather than evidence about what perfection is, to be interpreted in a nontrivial way. The history of failure to find an answer while looking in the wrong place doesn't prove that the answer is nowhere to be found, or that there is now positive knowledge about the absence of the answer in the world.

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

0Vladimir_Nesov
All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?

2[anonymous]
jwdink, I don't think Vladimir Nesov is making an Is-Ought error. Think of this: You have values (preferences, desired ends, emotional "impulses" or whatever) which are a physical part of your nature. Everything you decide to do, you do because you Want to. If you refuse to acknowledge any criteria for behavior as valuable to you, you're saying that what feels valuable to you isn't valuable to you. This is a contradiction! An Is-Ought problem arises when you attempt to derive a Then without an If. Here, the If is given: If you value what you value, then you should do what is right in accordance with your values.
1Vladimir_Nesov
The problem is a confusion. Human preference is something implemented in the very real human brain.

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, and therefore this would be a value that they're successfully fulfilling?

0Vladimir_Nesov
Yes they could care about either outcome. The question is whether they did, whether their true hidden preferences said that a given outcome is preferable.

You can't decide your preference, preference is not what you actually do, it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is.

You've lost me.

0Vladimir_Nesov
The analogy in the next paragraph was meant to clarify. Do you see the analogy? A person in this analogy is an equation together with an algorithm for approximately solving that equation. Decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation that the person can't solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.
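
A tiny numerical illustration of that analogy (my own sketch, not Nesov's): the equation x^2 = 2 uniquely defines its positive solution, which plays the role of the hidden preference, while a few Newton iterations play the role of the decision algorithm, producing approximations in the general ballpark of the exact answer.

```python
import math

# The "equation": x**2 - 2 == 0. Its exact positive solution, sqrt(2),
# stands in for the hidden preference -- defined by the equation, but
# never written out explicitly inside it.
f = lambda x: x**2 - 2
df = lambda x: 2 * x

# The "decision algorithm": a few Newton steps, standing in for the
# person's actual decisions -- approximations near the exact solution.
x = 1.0
for _ in range(4):
    x = x - f(x) / df(x)

print(f"approximate 'decision' lands at: {x:.10f}")
print(f"exact 'preference' is:           {math.sqrt(2):.10f}")
```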

Which of the decisions is (actually) the better one depends on the preferences of the one who decides

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculati

... (read more)
0Vladimir_Nesov
You can't decide your preference, preference is not what you actually do, it is what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so that you aren't necessarily capable of seeing what it is. Compare preference to a solution to an equation: you can see the equation, you can take it apart on the constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (analogized to the actual decisions) tend to give their results in the general ballpark of the exact solution.

Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality.

Okay. Would you say this statement is based on reason?

If a decision decreases utility, is it not irrational?

I don't see how you could go about proving this.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless

... (read more)
1eirenicon
I should have said "decreases personal utility." When I say rationality, I mean rationality. Decreasing personal utility is the opposite of "winning".

Ah, then I misunderstood. A better way of phrasing my challenge might be: it sounds like we might have different algorithms, so prove to me that your algorithm is more rational.

No one has answered this challenge.

Well, sure, when you phrase it like that. But your language begs the question: it assumes that the desire for dignity/autonomy is just an impulse/fuzzy feeling, while the desire for preserving human life is an objective good that is the proper aim for all (see my other posts above). This sounds probable to me, but it doesn't sound obvious/ rationally derived/ etc.

I could after all, phrase it in the reverse manner. IF I assume that dignity/autonomy is objectively good:

then the question becomes "everyone preserves their objectively good dignity"

... (read more)
1Psy-Kosh
Well then... I'd say a morality that puts the dignity of a few people (the decision makers) as having more importance than, well, the lives and well being of the majority of the human race is not a very good morality. ie, I am claiming "it seems to be that a consequence of my morality is that..." Alternately "sure, maybe you value 'battle of honor' more than human lives, but then your values don't seem to count as something I'd call morality"

What do the space monsters deserve?

Haha, I was not factoring that in. I assumed they were evil. Perhaps that was closed-minded of me, though.

The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead?

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view, I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

To put i

... (read more)
-1eirenicon
If a decision decreases [personal] utility, is it not irrational?

Some people would say that it is dishonourable to hand over your wallet to a crackhead with a knife. When I was actually in that situation, though (hint: not as the crackhead), I didn't think about my dignity. I just thought that refusing would be the dumbest, least rational possible decision. The only time I've ever been in a fight is when I couldn't run away. If behaving honourably is rational then being rational is a good way to get killed. I'm not saying that being rational always leads to morally satisfactory decisions. I am saying that sometimes you have to choose moral satisfaction over rationality... or the reverse.

As for the trolley problem, what we are dealing with is the aftermath of the trolley problem. If you save the people on the trolley, it could be argued that you have behaved dishonourably, but what about the people you saved? Surely they are innocent of your decision. If humanity is honourably wiped out by the space monsters, is that better than having some humans behave dishonourably and others (i.e. those who favoured resistance, but were powerless to effect it) survive honourably?
0Vladimir_Nesov
Utilitarian calculation is a more rational process of arriving at a decision, while for the output of this process (a decision) for a specific question you can argue that it's inferior to the output of some other process, such as free-running deliberation or random guessing. When you are comparing the decisions of sacrifice of children and war to the death, the first isn't "intrinsically utilitarian", and the second isn't "intrinsically emotional". Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily reflected well in actions and choices. It's instrumentally irrational for the agent to choose poorly according to its preferences.

Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in preference-independent fashion, and then given preferences as parameters. Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations when utilitarian calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or a transformation that computes how much you value N things based on how much you value one thing may be wrong. In other cases, the problem could be reduced to a calculation incorrectly, losing important context.

However, whatever the right decision is, there normally should be a way to fix the parameters of utilitarian calculation so that it outputs the right decision. For example, if the right decision in the topic problem is actually war to the death, there should be a way to more formally understand the situation so that the utilitarian calculation outputs "war to the death" as the right decision.
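
One way to picture "constructed in preference-independent fashion, and then given preferences as parameters" is sketched below (my own toy example, with made-up outcomes and numbers): the expected-utility procedure is fixed, while the utility function encoding the agent's preferences is passed in as an argument, so changing the parameters can change which decision comes out.

```python
# Minimal sketch (hypothetical, my own construction): a decision procedure
# built without reference to any particular preferences, then handed a
# utility function over outcomes as a parameter.

def choose(actions, utility):
    """Return the action with highest expected utility.

    actions: dict mapping action name -> list of (probability, outcome) pairs
    utility: function outcome -> float, encoding the agent's preferences
    """
    def expected_utility(action):
        return sum(p * utility(outcome) for p, outcome in actions[action])
    return max(actions, key=expected_utility)

# Toy, purely illustrative numbers for the dilemma discussed above.
actions = {
    "hand over the children": [(1.0, "most survive, children suffer")],
    "war to the death":       [(0.99, "near-extinction"), (0.01, "victory")],
}

# The same procedure, parameterized by different preferences, can output
# different decisions -- the parameters, not the procedure, carry the values.
values_a = {"most survive, children suffer": 0.7, "near-extinction": 0.0, "victory": 1.0}
values_b = {"most survive, children suffer": 0.1, "near-extinction": 0.2, "victory": 1.0}

print(choose(actions, values_a.get))  # -> "hand over the children"
print(choose(actions, values_b.get))  # -> "war to the death"
```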

Yeah, the sentiment expressed in that post is usually my instinct too.

But then again, that's the problem: it's an instinct. If my utilitarian impulse is just another impulse, then why does it automatically outweigh any other moral impulses I have, such as a value of human autonomy? If my utilitarian impulse is NOT just an impulse, but somehow is objectively more rational and outranks other moral impulses, then I have yet to see a proof of this.

2Psy-Kosh
"shut up and multiply" is, in principle, a way to weigh various considerations like the value of autonomy, etc etc etc... It's not "here's shut up and multiply" vs "some other value here", but "plug in your values + actual current situation including possible courses of action and compute" Some of us are then saying "it is our moral position that human lives are so incredibly valuable that a measure of dignity for a few doesn't outweigh the massively greater suffering/etc that would result from the implied battle that would ensue from the 'battle of honor' route"

I don't quite understand how your rhetorical question is analogous here. Can you flesh it out a bit?

I don't think the notion of dignity is completely meaningless. After all, we don't just want the maximum number of people to be happy, we also want people to get what they deserve-- in other words, we want people to deserve their happiness. If only 10% of the world were decent people, and everyone else were immoral, which scenario would seem the more morally agreeable: the scenario in which the 10% good people were ensured perennial happiness at the expense ... (read more)

2eirenicon
What do the space monsters deserve? If you factor in their happiness, it's an even more complicated problem. The space monsters need n human children to be happy. If you give them up, you have happy space monsters and (6 billion - n) happy (if not immediately, in the long term) humans. If you refuse, assuming the space monsters are unbeatable, you have happy space monsters and zero happy humans. The first scenario is better for both space monsters and humans. Sure, in the second scenario, the humans theoretically don't lose their dignity, but what does dignity mean to the dead? To put it in another light, what if this situation happened a hundred years ago? Would you be upset that the people alive at the time caved in to the aliens' demands, or would you prefer the human race had been wiped out?
1Vladimir_Nesov
See Shut up and multiply.
0Psy-Kosh
If you take an action that you know will result in a greater amount of death/suffering, just for the sake of your own personal dignity, do you actually deserve any dignity from that? ie, one can rephrase the situation as "are you so selfish as to put your own personal dignity above many many human lives?" (note, I have not watched the Torchwood episodes in question, merely going at this based on the description here.)

IF fighting them or otherwise resisting is known to be futile and IF there's sufficient reason to suspect that they will keep their word on the matter, then the question becomes "just about everyone gets killed" vs "most survive, but some number of kids get taken to suffer, well, whatever the experience of being used as a drug is. (eventual death within a human lifespan? do they remain conscious long past that? etc etc etc...)" That doesn't make the second option "good", but if the choices available amount to those two options, then we need to choose one. "Everyone gets killed, but at least we get some 'warm fuzzies of dignity'" would actually seem to potentially be a highly immoral decision.

Having said that... don't give up searching for alternatives or ways to fight the monsters-in-question that doesn't result in automatic defeat. What's said above applies to the pathological dilemma in the least convenient possible world where we assume there really are no plausible alternatives.

I'm surprised that was so downvoted too.

Perhaps I should rephrase it: I don't want to assert that it would've been objectively better for them to not give up the children. But can someone explain to me why it's MORE rational to give up in this situation?

0Aurini
I think it's my fault. I posted a... rather unpopular article about compromise. I agree with your hawk/brinksmanship analysis of the strategy. I've found in life that 'the easy way out' is usually not so easy. I'm still trying to break it down into game-theory language appropriate for this site, however.

That's horrible. They should've fought the space monsters in an all out war. Better to die like that than to give up your dignity. I'm surprised they took that route on the show.

0thomblake
I agree with SilasBarta below. Resistance may be futile but we'll give them a hell of a fight. They won't get our children even over our dead bodies.
1SilasBarta
That's not as bad an idea as your post's -3 rating suggests. First of all, what's to ensure the aliens even keep their word? (I haven't seen this episode, so I don't know how that's handled.) For all we know, this could just be their way of "trolling" us so we get into an intraspecies flamewar and thus be unprepared for their actual plans, which are to attack and take whatever living children they can. In that case, the "nuclear option" is to "kill the children before the aliens get to them" .. which ends the human race anyway. And if the human race is going to end anyway, why not take as many of them down with us as we can?
0NQbass7
How many lives is your dignity worth? Would you be willing to actually kill people for your dignity, or are you only willing to make that transaction if someone else is holding the knife?

A good example of this (I think) is The Dark Knight, with the two ferries.

1michaelkeenan
Agreed. The one that annoys me the most is in the first Spiderman movie (spoiler warning) when the Green Goblin drops Mary-Jane and a tram full of child hostages, forcing Spiderman to choose who to save. I was excited to see what his choice would be...but then he just saves everyone.

The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover.

The author seems to assert that this is a cultural phenomenon. I wonder, however, whether our attempts at unifying into a theory might not be instinctive. Would it then be so obvious that Moral Realism was false? We have an innate demand for consistency in our moral principles; that might allow us to say something like "racism i... (read more)

The example of the paralysis anosognosia rationalization is, for some reason, extremely depressing to me.

Does anyone understand why this only happens in split brain patients when their right hemisphere motivates an action? Shouldn't it happen quite often, since the right side has no way of communicating to the left side "it's time to try a new theory," and the left side is the one that we'll be talking to?

If they are being called "fundamentally mental" because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it's not consistent with a reductionist worldview...

Is it therefore a priori logically incoherent? That's what I'm trying to understand. Would you exclude a "cartesian theatre" fundamental particle a priori?

(and it's also confused because you're not getting at how mental is different from non-mental). However, if they are being called fundamentally m

... (read more)
0byrnema
I deduce that the above case would be inconsistent with reductionism. And I think that it is logically incoherent, because I think non-reductionism is logically incoherent, because I think that reductionism is equivalent to the idea of a closed universe, which I think is logically necessary. You may disagree with any step in the chain of this reasoning. I think you guessed: I meant that there is no division between the mental and physical/mechanical. Believing that a division is a priori possible is definitely non-reductionist. If that is what Eliezer is saying, then I agree with him. To summarize, my argument is: [logic --> closed universe --> reductionism --> no division between the mental and the physical/mechanical]

what if qualions really existed, in a material way and there were physical laws describing how they were caught and accumulated by neural cells? There's absolutely no evidence for such a theory, so it's crazy, but it's not logically impossible or inconsistent with reductionism, right?

Hmm... excellent point. Here I do think it begins to get fuzzy... what if these qualions fundamentally did stuff that we typically attribute to higher-level functions, such as making decisions? Could there be a "self" qualion? Could their behavior be indeterministi... (read more)

0byrnema
Where things seem to get fuzzy is where things seem to go wrong. Nevertheless, forging ahead.. If they are being called "fundamentally mental" because they interact by one set of rules with things that are mental and a different set of rules with things that are not mental, then it's not consistent with a reductionist worldview (and it's also confused because you're not getting at how mental is different from non-mental). However, if they are being called fundamentally mental because they happen to be mechanistically involved in mental mechanisms, but still interact with all quarks in one consistent way everywhere, it's logically possible. Also you asked if these qualions could be indeterministic. It doesn't matter if you apply this question to a hypothesized new particle. The question is, is indeterminism possible in a closed system? If so, we could postulate quarks as a source of indeterminism.

Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider "lower level".

Oh! Certainly. But this doesn't seem to exclude "mind", or some element thereof, from being irreducible-- which is what Eliezer was trying to argue, right? He's trying to support reductionism, and this seems to include an attack on "fundamentally mental" entities. Based on what you'r... (read more)

1byrnema
In what way would these "feelions" or "qualions" not be materials? Your answer to this question may reveal some interesting hidden assumptions. Are you sure it's weird because it's not reductionist? Or because such a theory would never be seen outside of a metaphysical theory? So you automatically link the idea that minds are special because they have "qualions" with "metaphysical nonsense". But what if qualions really existed, in a material way and there were physical laws describing how they were caught and accumulated by neural cells? There's absolutely no evidence for such a theory, so it's crazy, but it's not logically impossible or inconsistent with reductionism, right?

QM possesses some fundamental level of complexity, but I wouldn't agree in this context that it's "fundamentally complicated".

I see what you mean. It's certainly a good distinction to make, even if it's difficult to articulate. Again, though, I think it's Occam's Razor and induction that makes us prefer the simpler entities-- they aren't the sole inhabitants of the territory by default.

Indeed, an irreducible entity (albeit with describable, predictable, behavior) is not much better than a miracle. This is why Occam's Razor, insisting that our model of the world should not postulate needless entities, insists that everything should be reduced to one type of stuff if possible. But the "if possible" is key: we verify through inference and induction whether or not it's reasonable to think we'll be able to reduce everything, not through a priori logic.

That said, I wonder if the claim can't be near-equivalently rephrased "it's impossible to imagine a non-reductionist scenario without populating it with your own arbitrary fictions".

Ah, that's very interesting. Now we're getting somewhere.

I don't think it has to be arbitrary. Couldn't the following scenario be the case?:

The universe is full of entities that experiments show reducible to fundamental elements with laws (say, quarks), or entities that induction + parsimony tells us ought to be reducible to fundamental elements (since these entiti... (read more)

1byrnema
I think we might separate the ideas that there's only one type of particle and that the world is reductionist. It is an open question as to whether everything can be reduced to a single fundamental thing (like strings) and it wouldn't be a logical impossibility to discover that there were two or three kinds of things interacting. (Or would it?)

Reductionism, as I understand it, is the idea that the higher levels are completely explained by (are completely determined by) the lower levels. Any fundamentally new type of particle found would just be added to what we consider "lower level". So what does it say about the world that it is reductionist? I propose the following two things are being asserted:

(1) There's no rule that operates at an intermediate level that doesn't also operate on the lower levels. This means that you can't start adding new rules when a certain level of organization is reached. For example, if you have a law that objects with mass behave a certain way, you can't apply it to everything that has mass but not quarks. This is a consistency rule.

(2) Any rule that applies to an intermediate level is reducible to rules that can be expressed with and applied at the lower level. For example, we have the rule that two competing organisms cannot coexist in the same niche. Even though it would be very difficult to demonstrate, a reductionist worldview argues that in principle this rule can be derived from the rules we already apply to quarks.

When people argue about reductionism, they are usually arguing about (2). They have some idea that at a certain level of organization, new rules can come into play that simply aren't expressible in the lower levels -- they're totally new rules.

Here's a thought experiment about an apple that helped me sort through these ideas: Suppose that I have two objects, one in my right hand and one in my left hand. The one in my left hand is an apple. The one in my right hand has exactly the same quarks in exactly the sa

Of course it's technically possible that the territory will play a game of supernatural and support a fundamental object behaving according to a high-level concept in your mind. But this is improbable to an extent of being impossible, a priori, without need for further experiments to drive the certainty to absolute.

Not quite sure what you're saying here. If you're saying:

1)"Entities in the map will not magically jump into the territory," Then I never disagreed with this. What I disagreed with is your labeling certain things as obviously in the... (read more)

To loqi and Nesov:

Again, both of your responses seem to hinge on the fact that my challenge below is easily answerable, and has already been answered:

Tell me the obvious, a priori logically necessary criteria for a person to distinguish between "entities within the territory" and "high-level concepts." If you can't give any, then this is a big problem: you don't know that the higher level entities aren't within the territory. They could be within the territory, or they could be "computational abstractions." Either position i

... (read more)
1loqi
I don't know. I wasn't supporting the main thread of argument, I was responding specifically to your implicit comparison of the complexity of quarks and "about-ness", and pointing out that the complexity of the latter (assuming it's well-defined) is orders of magnitude higher than that of the former. "About-ness" may seem simpler to you if you think about it in terms that hide the complexity, but it's there. A similar trick is possible with QM... everything is just waves. QM possesses some fundamental level of complexity, but I wouldn't agree in this context that it's "fundamentally complicated".

This doesn't really answer the question, though. I know that a priori means "prior to experience", but what does this consist of? Originally, for something to be "a priori illogical", it was supposed to mean that it couldn't be thought without contradicting oneself, because of pre-experiential rules of thought. An example would be two straight lines on a flat surface forming a bounded figure-- it's not just wrong, but inconceivable. As far as I can tell, an irreducible entity doesn't possess this inconceivability, so I'm trying to figur... (read more)
