Vaniver comments on Conceptual Analysis and Moral Theory - Less Wrong

60 Post author: lukeprog 16 May 2011 06:28AM




Comment author: Vaniver 18 November 2014 07:10:08PM 2 points [-]

"Crystalising" - you mean clarifying, or defending?

I mean clarifying. I use that term because some people look at the Sequences and say "but that's all just common sense!". In some ways it is, but in other ways a major contribution of the Sequences is to not just let people recognize that sort of common sense but reproduce it.

I understand that clarification and defense are closely linked, and am trying to separate intentionality more than I am methodology.

Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level

I consider 'Stoicism' to be a 'philosophy,' but I notice that Stoics are not particularly interested in debating the finer points of abstractions, and might even consider doing so dangerous to their serenity relative to other activities. A particularly Stoic activity is negative visualization - the practice of imagining something precious being destroyed, to lessen one's anxiety about its impermanence through deliberate acceptance, and to increase one's appreciation of its continued existence.

One could see this as an unconnected claim put forth by Stoics that can be evaluated on its own merits (we could give a grant to a psychologist to test whether or not negative visualization actually works), but it seems to me that it is obvious that in the universe where negative visualization works, Stoics would notice and either copy the practice from its inventors or invent it themselves, because Stoicism is fundamentally about reducing anxiety and achieving serenity, and this seems amenable to a holistic characterization. (The psychologist might find that negative visualization works differently for Stoics than non-Stoics, and might actually only be a good idea for Stoics.)

Comment author: TheAncientGeek 18 November 2014 08:23:29PM 1 point [-]

Your example of "a philosophy" is pretty much a religion, by current standards. By philosophy I meant the sort of thing typified by current anglophone philosophy.

Comment author: Toggle 18 November 2014 08:36:36PM *  3 points [-]

That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic- instrumental constructions designed to promote happiness. Their schools of thought, literal schools where one could go, were social communities oriented around that goal. The sequences are much more similar to the latter ('rationalists win' + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.

Comment author: TheAncientGeek 18 November 2014 08:50:54PM -2 points [-]

I'm delighted to hear that Clippy and Newcomb's box are real-world, happiness-promoting issues!

Comment author: Nornagest 18 November 2014 09:30:42PM *  3 points [-]

Clippy is pretty speculative, but analogies to Newcomb's problem come up in real-world decision-making all the time; it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior (read: people that know each other), much like how the Prisoner's Dilemma is a dramatization of a certain type of coordination problem. It doesn't have to literally involve near-omniscient aliens handing out money in opaque boxes.

Comment author: Lumifer 18 November 2014 09:39:39PM 0 points [-]

it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior

Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb's problem.

Comment author: MarkusRamikin 18 November 2014 10:03:15PM *  4 points [-]

Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomblike problems don't involve Omega, but they also don't (mainly) involve highly contrived thought-experiment-like choices regarding which we are not prepared to model each other.

Comment author: Lumifer 18 November 2014 10:23:21PM 0 points [-]

That seems to me to expand Newcomb's Problem greatly -- in particular, into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of Newcomb's Problem is much narrower. To quote Wikipedia,

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined.

and that's clearly not the situation of Joe and Kate.

Comment author: dxu 19 November 2014 02:23:30AM *  2 points [-]

Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?

Comment author: Lumifer 19 November 2014 02:41:09AM -2 points [-]

I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.

Comment author: TheOtherDave 18 November 2014 11:09:17PM 2 points [-]

What, on your view, is the argument for not two-boxing with an omniscient Omega?
How does that argument change with a non-omniscient but skilled predictor?

Comment author: Lumifer 19 November 2014 02:24:11AM 0 points [-]

If Omega is omniscient the two actions (one- and two-boxing) each have a certain outcome with the probability of 1. So you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Comment author: wedrifid 19 November 2014 02:49:20AM *  2 points [-]

If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.

Comment author: dxu 19 November 2014 02:30:13AM 2 points [-]

You are facing a modified version of Newcomb's Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
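The arithmetic behind this question (and wedrifid's "can multiply" quip above) can be made explicit. A minimal sketch, assuming the standard Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box, and a predictor that is right 99% of the time regardless of which choice you make:

```python
# Expected value of each strategy against a 99%-accurate predictor,
# assuming standard Newcomb payoffs: box A always holds $1,000;
# box B holds $1,000,000 iff one-boxing was predicted.
ACCURACY = 0.99
BOX_A = 1_000
BOX_B = 1_000_000

# If you one-box: with p=0.99 the predictor foresaw it and filled box B.
ev_one_box = ACCURACY * BOX_B + (1 - ACCURACY) * 0

# If you two-box: with p=0.99 the predictor foresaw it and left box B empty.
ev_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B)

print(f"one-box EV: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box EV: ${ev_two_box:,.0f}")  # $11,000
```

On these assumptions, one-boxing remains far ahead; the predictor's accuracy would have to drop below about 50.05% before two-boxing has the higher expected value.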

Comment author: Nornagest 18 November 2014 09:47:52PM *  1 point [-]

Think of the situation in the last round of an iterated Prisoner's Dilemma with known bounds. Because of the variety of agents you might be dealing with, the payoffs there aren't strictly Newcomblike, but they're closely related; there's a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don't model you as a CooperateBot) and defect otherwise. If you know you're dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb.

Comment author: Vaniver 19 November 2014 03:07:33PM *  1 point [-]

By philosophy I meant the sort of thing typified by current anglophone philosophy.

You may note several posts ago that I noticed the word 'philosophy' was not useful and tried to substitute it with other, less loaded, terms in order to more effectively communicate my meaning. This is a specific useful technique with multiple subcomponents (noticing that it's necessary, deciding how to separate the concepts, deciding how to communicate the separation), that I've gotten better at because of time spent here.

Yes, comparative perspectives is much more about claims and much less about holism than any individual perspective- but for a person, the point of comparing perspectives is to choose one whereas for a professional arguer the point of comparing perspectives is to be able to argue more winningly, and so the approaches and paths they take will look rather different.

Comment author: TheAncientGeek 19 November 2014 04:15:31PM *  -1 points [-]

Professionals are quite capable of passionately backing a particular view. If amateurs are uninterested in arguing - your claim, not mine - that means they are uninterested in truth-seeking. People who adopt beliefs they can't defend are adopting beliefs as clothing.