TheAncientGeek comments on Conceptual Analysis and Moral Theory - Less Wrong
By "crystalising" do you mean clarifying, or defending?
Communicating the content of a claim is of limited use unless you can make it persuasive. That, in turn, requires defending it against alternatives. So the functions you are trying to separate are actually very interconnected.
(Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level)
I mean clarifying. I use that term because some people look at the Sequences and say "but that's all just common sense!". In some ways it is, but in other ways a major contribution of the Sequences is helping people not just recognize that sort of common sense but reproduce it.
I understand that clarification and defense are closely linked, and am trying to separate intentionality more than I am methodology.
I consider 'stoicism' to be a 'philosophy,' but I notice that Stoics are not particularly interested in debating the finer points of abstractions, and might even consider doing so dangerous to their serenity relative to other activities. A particularly Stoic activity is negative visualization: the practice of imagining something precious being destroyed, to lessen one's anxiety about its impermanence through deliberate acceptance, and to increase one's appreciation of its continued existence.
One could see this as an unconnected claim put forth by Stoics that can be evaluated on its own merits (we could give a grant to a psychologist to test whether negative visualization actually works), but it seems obvious to me that in the universe where negative visualization works, Stoics would notice and either copy the practice from its inventors or invent it themselves, because Stoicism is fundamentally about reducing anxiety and achieving serenity, and this seems amenable to a holistic characterization. (The psychologist might find that negative visualization works differently for Stoics than for non-Stoics, and might actually only be a good idea for Stoics.)
Your example of "a philosophy" is pretty much a religion, by current standards. By "philosophy" I meant the sort of thing typified by current anglophone philosophy.
That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic: instrumental constructions designed to promote happiness. Their schools of thought, literal schools one could attend, were social communities oriented around that goal. The Sequences are much more similar to the latter ('rationalists win' + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.
I'm delighted to hear that Clippy and Newcomb's box are real-world, happiness-promoting issues!
Clippy is pretty speculative, but analogies to Newcomb's problem come up in real-world decision-making all the time; it's a dramatization of a certain class of problem arising from decision-making between agents with models of each other's probable behavior (read: people that know each other), much like how the Prisoner's Dilemma is a dramatization of a certain type of coordination problem. It doesn't have to literally involve near-omniscient aliens handing out money in opaque boxes.
Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb's problem.
Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomblike problems don't involve Omega, but they also don't (mainly) involve highly contrived thought-experiment-like choices regarding which we are not prepared to model each other.
That seems to me to expand Newcomb's Problem greatly -- in particular, into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of Newcomb's Problem is much narrower. To quote Wikipedia,
and that's clearly not the situation of Joe and Kate.
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?
What, on your view, is the argument for not two-boxing with an omniscient Omega?
How does that argument change with a non-omniscient but skilled predictor?
If Omega is omniscient, the two actions (one- and two-boxing) each have a certain outcome with probability 1. So you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome, so you two-box.
Unless you like money and can multiply, in which case you one-box and end up (almost but not quite certainly) richer.
You are facing a modified version of Newcomb's Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
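For concreteness, the expected-value arithmetic behind one-boxing against a 99%-accurate predictor can be sketched as a toy calculation (the function name and the standard $1,000,000 / $1,000 payoff amounts are assumptions for illustration, taken from the usual statement of the problem):

```python
# Toy expected-value comparison for Newcomb's Problem against a
# predictor with accuracy p. Standard payoffs: the opaque box holds
# $1,000,000 iff one-boxing was predicted; the transparent box
# always holds $1,000.

def expected_values(p, big=1_000_000, small=1_000):
    one_box = p * big                # paid only when the predictor foresaw one-boxing
    two_box = (1 - p) * big + small  # big box is full only when the predictor erred
    return one_box, two_box

one, two = expected_values(0.99)
# At 99% accuracy: one-boxing ~ $990,000, two-boxing ~ $11,000.
# One-boxing wins for any accuracy above 0.5 + small / (2 * big) = 0.5005.
```

So the one-boxer's edge survives almost any realistic degradation of the predictor's skill; the predictor only needs to be slightly better than chance.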
Think of the situation in the last round of an iterated Prisoner's Dilemma with known bounds. Because of the variety of agents you might be dealing with, the payoffs there aren't strictly Newcomblike, but they're closely related; there's a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don't model you as a CooperateBot) and defect otherwise. If you know you're dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb.
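The Newcomblike structure of that last-round decision can be illustrated with a toy calculation (the payoff matrix and the mirror-predictor opponent are assumptions for this sketch, not anything specified above): facing an opponent that predicts your move with accuracy p and then plays the move it predicts, cooperation is the analogue of one-boxing.

```python
# Toy payoffs for the row player in a Prisoner's Dilemma:
# (C,C)=3, (C,D)=0, (D,C)=5, (D,D)=1. The opponent predicts our move
# with accuracy p and plays whatever it predicts we will play.

def ev_cooperate(p):
    return p * 3 + (1 - p) * 0  # predicted correctly -> (C,C); wrongly -> (C,D)

def ev_defect(p):
    return p * 1 + (1 - p) * 5  # predicted correctly -> (D,D); wrongly -> (D,C)

# Cooperating pays once 3p > 5 - 4p, i.e. p > 5/7 (about 0.714),
# mirroring the one-boxing threshold in Newcomb's Problem.
```

Against a sufficiently good model of your behavior, defecting "for the guaranteed extra payoff" loses for the same reason two-boxing does.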
You may note several posts ago that I noticed the word 'philosophy' was not useful and tried to substitute it with other, less loaded, terms in order to more effectively communicate my meaning. This is a specific useful technique with multiple subcomponents (noticing that it's necessary, deciding how to separate the concepts, deciding how to communicate the separation), that I've gotten better at because of time spent here.
Yes, the comparative study of perspectives is much more about claims and much less about holism than any individual perspective, but for a person the point of comparing perspectives is to choose one, whereas for a professional arguer the point is to be able to argue more winningly, and so the approaches and paths they take will look rather different.
Professionals are quite capable of passionately backing a particular view. If amateurs are uninterested in arguing (your claim, not mine), that means they are uninterested in truth-seeking. People who adopt beliefs they can't defend are adopting beliefs as clothing.