Comment author: JoshuaZ 27 September 2011 02:58:19AM *  3 points [-]

I'm not completely sure what you are trying to say. I agree they could potentially evolve such an attitude if the selection pressure was high enough.

But evolution doesn't work like a chess player. Evolution does what works in the short term, blindly having the most successful alleles push forward to the next generation. If there were a chess analogy, evolution would be like a massive chess board with millions of players and each player making whatever move looks best at a quick glance, and then there are a few hundred thousand players who just move randomly.
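The greedy-search picture can be made concrete with a toy sketch (the fitness landscape and all numbers here are invented purely for illustration): each "player" takes whichever single move looks best right now, with no lookahead, and a player that starts near a small local peak stalls there even though a higher peak exists.

```python
import random

def fitness(x):
    # Toy landscape: a low local peak at x = 2, a higher peak at x = 8.
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 10

def step(x, greedy=True):
    # No lookahead: each "player" only weighs the immediate neighbours.
    moves = [x - 1, x, x + 1]
    if greedy:
        return max(moves, key=fitness)   # whatever looks best at a glance
    return random.choice(moves)          # the purely random players

# One short-sighted climber and one random mover, both starting at 0.
greedy_player, random_player = 0, 0
for _ in range(50):
    greedy_player = step(greedy_player, greedy=True)
    random_player = step(random_player, greedy=False)

# The greedy player climbs to the local peak at x = 2 and stays there;
# only random drift can cross the valley toward the higher peak at x = 8.
```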

Comment author: markrkrebs 27 September 2011 10:17:16AM 1 point [-]

Good point. It's easy to imagine a lot of biologically good designs going unexpressed because the first move is less optimal.

Comment author: wedrifid 27 September 2011 02:51:59AM *  5 points [-]

"I'll dare in this context to suggest that evolution IS intelligence."

That's a waste of a word. Call evolution an optimisation process (which is only a slight stretch). Then you can use the word 'intelligence' to refer to what you refer to as 'meta-intelligence'. Keeping distinct concepts distinct while also acknowledging the relationship tends to be the best policy.

"Have you heard of thought as an act of simulating action and forecasting the results? Is that not what evolution does,"

No, it really isn't and using that model encourages bad predictions about the evolution of a species. Species don't 'forecast and select'. Species evolve to extinction with as much enthusiasm as they evolve to new heights of adaptive performance. Saying that evolution 'learns from the past' would be slightly less of an error but I wouldn't even go there.

Comment author: markrkrebs 27 September 2011 10:12:54AM 0 points [-]

Hmm, I agree, except for the last part. Blind trial (which is what genetic mixing and mutating does) is like poorly guided forecasting. (Good simulation engineers or chess players somehow "see" the space of likely moves; bad ones just try a lot.) And the species doesn't select; the environment does.

I need to go read "evolve to extinction."

Thanks

Comment author: Desrtopa 02 September 2011 01:39:15PM 3 points [-]

It might be better if doctors could make hard choices like this and keep it absolutely secret, but it's nearly impossible to contrive. As long as most people strongly disapprove of that sort of action, and people who want to become doctors do not have overwhelmingly different inclinations, and the training itself does not explicitly advocate for that sort of action, the vast majority of doctors will not take the organ harvester side in the dilemma, even in circumstances where they think they can get away with it (which will be rare,) and those are the basic minimum requirements to pull it off without people guessing.

A society where the public didn't mind doctors harvesting the few to save the many would probably be considerably better off, but that would require the thoughts and actions of the entire society to be different, not just the doctors within it.

Following consequentialist ethics doesn't mean that you should behave as you would in the highest possible utility world, if that doesn't increase utility in the world in which you actually find yourself.

Comment author: markrkrebs 27 September 2011 02:25:14AM 0 points [-]

The world we find ourselves in would never expect the doctor to cut the guy up. Few people are doing that consequentialist math (well, maybe a few long thinkers on this site). So the supposed long view as a reason for not doing it is baloney. On that basis alone, I think the thought experiment fails to end up recommending the conventional behavior it's trying to rationalize.

Comment author: JoshuaZ 04 September 2011 12:39:40AM 6 points [-]

Though mind you, even against animals, vengeance is rather useful; because even animals can model humans to some extent. The wolves in The Jungle Book learned to "seven times never kill Man", after learning that to hurt one man, means many other men with guns coming to kill wolves in return.

Beware fictional evidence. I suspect that wolves might be smart enough in individual cases to recognize humans are a big nasty threat they don't want to mess with. But that makes sense in a context without any understanding of vengeance.

Comment author: markrkrebs 27 September 2011 02:21:47AM -1 points [-]

Well, they could EVOLVE that reticence for perfectly good reasons. I'll dare in this context to suggest that evolution IS intelligence. Have you heard of thought as an act of simulating action and forecasting the results? Is that not what evolution does, only the simulations are real, and the best chess moves "selected?"

A species thereby exhibits meta-intelligence, no?

Comment author: Danneau 04 September 2011 10:28:58PM 1 point [-]

I put in the different degrees of injury to set the context for the doctor's choice... maybe it takes 5 times as long to save the severely injured person. I didn't mean to imply that the severity of the injury affects the moral calculation.

You're right, this is like the trolley problem. When all 6 people are anonymous, we do the calculation and kill 1 to save 5. When the trolley problem is framed as "push the fat man off the bridge", that's enough personalization to trigger the other part of the brain.

Moral philosophy in general tries to find universal principles whose logical consequences agree with our moral intuition. The OP is saying that we can fix consequentialism by making the moral calculations more complicated. Good luck with that! If moral intuition comes from two different parts of the brain that don't agree with each other, then we can always construct moral dilemmas by framing situations so that they activate one part or another of our brains.

Comment author: markrkrebs 27 September 2011 02:17:02AM 0 points [-]

"philosophy tries... to agree with our ...intuition..."? Bravo! See, I think that's crazy. Or if it's right, it means we're stipulating the intuition in the first place. Surely that's wrong? Or at least, we can look back in time to see "obvious" moral postulates we no longer agree with. In science we come up with a theory and then test it in the wind tunnel or something. In philosophy, is our reference standard kilogram just an intuition? That's unsatisfying!

Comment author: markrkrebs 27 September 2011 02:14:25AM 0 points [-]

I had fun with friends recently considering the trolley problem from the perspective of INaction. When saving the fat man required an act of volition, even (say) just a warning shout, they (we) felt less compelled to save him. (He was already on the track and would have to be warned off, get it?) It seems we are responsible for what we do, not so much for what we elect NOT to do. Since the consequences are the same, it seems wrong that there is a perceived difference.

This highlights, I suppose, the author's presumed contention (consequentialism generally) that the correct ethical choice is obviously one of carefully (perhaps expensively!) calculated long-term outcomes, and equal to what feels right only coincidentally. I think in the limit we would (consequentialists all) just walk into the hospital and ask for vivisection, since we'd save 5 lives. The reason I don't isn't JUST altruism, because I wouldn't ask you to either; instead it's a step closer to Kant's absolutism: as humans we're worth something more than ants (who I submit are all consequentialists?) and have individual value. I need to work on expressing this better...

Comment author: markrkrebs 27 September 2011 01:37:13AM 1 point [-]

Your doctor with 5 organs strikes me as Vizzini's dilemma in The Princess Bride: "I am not a great fool, so I can clearly not choose the wine in front of you."

So it goes, calculating I-know-you-know-I-know unto silliness. Consequentialists I've recently heard lecturing went to great lengths, as you did, to rationalize what they 'knew' to be right. Can you deny it? The GOAL of the example was to show that "right thinking" consequentialists would come up with the same thing all our reptile brains are telling us to do.

When you throw a ball, your cerebral cortex doesn't do sums to figure where it will land; primitive analog calculation does it fast and with reasonable accuracy. As we all know, doctors across the nation don't do your TDL sums either. Nor do I think they've internalized the results unconsciously. They have an explicit moral code which, in its simple statements, would disagree.

The thing I find interesting, the challenge I'd like to suggest, is whether consequentialism is somewhat bankrupt in that it is bending over backwards to "prove" things we all seem to know, instead of daring to prove something less obvious (or perhaps unknown / controversial). If you can make a NEW moral statement, and argue to make it stick, well that's like finding a new particle of matter or something: quite valuable.

In response to Belief in Belief
Comment author: markrkrebs 22 September 2011 08:25:01PM 1 point [-]

Surprised not to find Pascal's wager linked to this discussion since he faced the same crisis of belief. It's well known he chose to believe because of the enormous (inf?) rewards if that turned out to be right, so he was arguably hedging his bets.
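The hedge is just an expected-value comparison. A toy sketch (the credence and payoff numbers are placeholders of my own; the argument only needs the reward-if-right to dwarf every finite cost):

```python
# Pascal's wager as a toy expected-value comparison.
p_god = 1e-9            # any nonzero credence will do
reward_if_right = 1e18  # stand-in for the "enormous (inf?)" reward
cost_of_belief = 1.0    # finite worldly cost of practising belief

ev_believe = p_god * reward_if_right - cost_of_belief
ev_disbelieve = 0.0

# For any nonzero p_god and a large enough reward, belief wins the wager.
assert ev_believe > ev_disbelieve
```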

It's less well known that he understood that coerced belief for expediency's sake would be obvious to an omniscient God, so it wasn't enough to choose to believe; he actually Had To believe. To this end he hoped that practice would make perfect, and I think he died worrying about it. This is described in the Wikipedia article in an evasive third person, but a philosophy podcast I heard attributed the dilemma of insincere belief to Pascal directly.

Fun stuff.

Comment author: markrkrebs 07 March 2010 01:13:20PM *  2 points [-]

For reasons I perhaps don't fully understand, this thread and others like it are unsettling to me. Doesn't high status confer the ability (and possibly the duty, in some contexts) to treat others better, to carry their pack so to speak? Further, acting high status isn't necessary at all if you actually have it (it being the underlying competence that status supposedly, ideally, signifies). I am a high status athlete (in a tiny, circumscribed world), and in social situations I try to signal humility so others won't feel bad. They can't keep up, and if made to feel so, will not want to come again. Maybe in this forum we just want to drop anyone who can't keep the pace.

If I see someone acting supercilious or indifferent, signaling status on all frequencies, I will infer they have something to hide, or strong feelings of incompetence that need to be stroked. Now we can play the game of you-know-I-know that high status signallers may be compensating, but it's a silly game, because faking status, if that's what you want, is only a temporary fiction: any close relationship will soon scrape through that whitewash. Unfortunately, I think poseurs do manage to get by quite well in the world, by exactly the techniques being discussed here. Maybe everybody should get a tattoo with their VO2max and IQ right on their forehead?

Comment author: byrnema 06 March 2010 08:29:12PM *  3 points [-]

I really am grateful to JGWeissman for helping me click on the fact that light isn't something that obeys the wave described by Maxwell's equations, but is that wave. The difference is between imagining light as a type of substance compelled to oscillate with the wave pattern, and there simply being a wave pattern, resulting naturally from causal interactions, that our vision interprets as "light".

Thus this is the explanation I would give my past self for what light is (not, as I would have put it, "why light oscillates"):

A charge creates an electromagnetic field. If the charge moves, the electromagnetic field will have to change. However, while the field is defined over infinite space, the field cannot update instantaneously over all of space. Instead, the field updates at the speed of light from the new position of the charge. At a small, fixed moment in time after the point charge has moved, the field has updated within a sphere of a certain radius, but has not yet updated outside this radius. What we call 'light' is the defect radiating outward through space like a ripple. When our eyes intercept this defect, we gain information about the point charge's displacement and -- in some way I don't understand, and don't need to for the immediate explanation -- the field no longer needs to keep updating and the ripple stops propagating (the wave collapses to an intercepted particle / photon).
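The finite-speed update can be sketched with a toy "light-sphere" check (the speed and observer distances are arbitrary units I've chosen for illustration): after the charge jumps at t = 0, an observer at distance r only sees the updated field once t >= r/c.

```python
c = 1.0                                  # propagation speed, arbitrary units
observer_distances = [0.5, 1.0, 2.0, 5.0]

def field_updated(r, t):
    # True once the observer is inside the expanding sphere of radius c*t.
    return t >= r / c

t = 1.5
updated = [r for r in observer_distances if field_updated(r, t)]
# At t = 1.5 only the observers at r = 0.5 and r = 1.0 have seen the jump;
# the boundary between updated and not-yet-updated regions is the ripple.
```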

So I no longer see light as a thing traveling through space, but as information about an updated field traveling in finite time.

Does this make sense? I suppose it could be completely wrong, but it is what I mean by a 'mechanical' explanation.

Oh, and I'll add that light oscillates because the electric and magnetic fields update each other in finite time, and there is a slight lag, so that the wave has an amplitude. I see this as analogous to predator-prey oscillations in a Lotka-Volterra model; if the fields responded instantaneously there would be no oscillation.
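The predator-prey analogy can be sketched with a quick Euler integration of the Lotka-Volterra equations (the rate constants and starting populations are arbitrary illustrative values): because each population responds to the other's current level, each lags the other's changes, and neither settles; the lag is what sustains the oscillation.

```python
# dx/dt = a*x - b*x*y  (prey),  dy/dt = -c*y + d*x*y  (predator)
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # arbitrary illustrative rates
prey, pred = 10.0, 5.0
dt = 0.001

prey_history = []
for _ in range(20000):
    dprey = (a * prey - b * prey * pred) * dt
    dpred = (-c * pred + d * prey * pred) * dt
    prey, pred = prey + dprey, pred + dpred
    prey_history.append(prey)

# The prey population repeatedly overshoots its equilibrium (c/d = 20)
# and falls back, rather than settling to a fixed point.
```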

Comment author: markrkrebs 07 March 2010 12:13:46AM 0 points [-]

Most excellent. Now, glasshoppah, you are ready to lift the bowl of very hot red coals. Try this
