
TheAncientGeek comments on On the importance of Less Wrong, or another single conversational locus - Less Wrong

84 Post author: AnnaSalamon 27 November 2016 05:13PM




Comment author: TheAncientGeek 28 November 2016 02:05:23PM 3 points [-]

"debating philosophy"

As opposed to what? Memorising the One True Philosophy?

Comment author: Vaniver 28 November 2016 05:07:44PM 5 points [-]

As opposed to what? Memorising the One True Philosophy?

The quotes signify that they're using that specifically as a label; in context, it looks like they're pointing to the failure mode of preferring arguments as verbal performance to arguments as issue resolution mechanism. There's a sort of philosophy that wants to endlessly hash out the big questions, and there's another sort of philosophy that wants to reduce them to empirical tests and formal models, and we lean towards the second sort of philosophy.

Comment author: TheAncientGeek 28 November 2016 06:16:14PM 2 points [-]

How many problems has the second sort solved?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Comment author: Vaniver 28 November 2016 08:04:10PM *  5 points [-]

How many problems has the second sort solved?

Too many for me to quickly count?

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

Yes. It seems to me that both of those factors drive discussions, and most conversations about philosophical problems can be easily classified as mostly driven by one or the other, and that it makes sense to separate out conversations where the difficulty is natural or manufactured.

I think a fairly large part of the difference between LWers and similarly intelligent people elsewhere is the sense that it is possible to differentiate conversations based on the underlying factors, and that it isn't always useful to manufacture difficulty as an opportunity to display intelligence.

Comment author: Kaj_Sotala 29 November 2016 10:44:47AM 2 points [-]

Too many for me to quickly count?

Name three, then. :)

Comment author: Vaniver 29 November 2016 04:18:16PM 3 points [-]

What I have in mind there is basically 'approaching philosophy like a scientist', and so under some views you could chalk up most scientific discoveries there. But focusing on things that seem more 'philosophical' than not:

How to determine causality from observational data; where the perception that humans have free will comes from; where human moral intuitions come from.
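The causality example can be made concrete: in a simple structural causal model, the naive observational regression slope and the actual causal effect come apart when a confounder is present, and recovering the causal effect requires adjusting for it. A minimal sketch with made-up numbers (illustrative only, not from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative structural causal model: Z confounds X and Y.
#   Z ~ N(0,1);  X = Z + noise;  Y = 2*X + 3*Z + noise
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 2 * x + 3 * z + rng.normal(size=n)

# Naive observational estimate: regress Y on X alone.
# Biased by the confounder Z: analytically Cov(X,Y)/Var(X) = 7/2 = 3.5.
naive = np.polyfit(x, y, 1)[0]

# Adjusted estimate: control for Z (back-door adjustment).
# Recovers the true causal coefficient on X, which is 2.
X = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive slope    = {naive:.2f}")     # biased, close to 3.5
print(f"adjusted slope = {adjusted:.2f}")  # close to the causal effect 2.0
```

The gap between the two slopes is the whole point: observational correlation alone does not identify the causal effect, but with the right model structure an adjustment recovers it.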

Comment author: TheAncientGeek 04 December 2016 01:01:10PM *  2 points [-]

Approaching philosophy as science is not new. It has had a few spectacular successes, such as the wholesale transfer of cosmology from philosophy to science, and a lot of failures, judging by the long list of unanswered philosophical questions (about 200, according to Wikipedia). It also has the special pitfall of philosophically uninformed scientists answering the wrong question:

How to determine causality from observational data;

What causality is is the correct question.

where the perception that humans have free will comes from;

Whether humans have the power of free will is the correct question.

where human moral intuitions come from.

Whether human moral intuitions are correct is the correct question.

Comment author: Vaniver 04 December 2016 09:46:56PM 2 points [-]

What causality is is the correct question.

Oh, if you count that one as a question, then let's call that one solved too.

Whether humans have the power of free will is the correct question.

Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.

Whether human moral intuitions are correct is the correct question.

Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

Comment author: TheAncientGeek 05 December 2016 07:34:00PM *  1 point [-]

What causality is is the correct question.

Oh, if you count that one as a question, then let's call that one solved too.

Solved where?

Whether humans have the power of free will is the correct question.

Disagree; I think this is what it looks like to get the question of where the perception comes from wrong.

How can philosophers be systematically wrong about the nature of their questions? And what makes you right?

Of course, inasmuch as you agree with Y., you are going to agree that the only question to be answered is where the perception comes from, but this is about truth, not opinion: the important point is that he never demonstrated that.

Whether human moral intuitions are correct is the correct question.

Disagree for roughly the same reason; the question of where the word "correct" comes from in this statement seems like the actual query, and is part of the broader question of where human moral intuitions come from.

If moral intuitions come from God, that might underpin correctness, but things are much less straightforward in naturalistic explanations.

Comment author: Vaniver 15 December 2016 01:29:01AM *  3 points [-]

Solved where?

On one level, by the study of dynamical systems and the invention of differential equations.

On a level closer to what you meant when you asked the question, most of the confusing things about 'causality' are actually confusing things about the way our high-level models of the world interact with the world itself.

The problem of free will is a useful example of this. People draw this picture that looks like [universe] -> [me] -> [my future actions], and get confused, because it looks like either determinism (the idea that [universe] -> [my future actions] ) isn't correct or the intuitive sense that I can meaningfully choose my future actions (the idea that [me] -> [my future actions] ) isn't correct.

But the actual picture is something like [universe: [me] -> [my future actions] ]. That is, I am a higher-level concept in the universe, and my future actions are a higher-level concept in the universe, and the relationship between the two of them is also a higher-level concept in the universe. Both determinism and the intuitive sense that I can meaningfully choose my future actions are correct, and there isn't a real conflict between them. (The intuitive sense mostly comes from the fact that the higher level concept is a lossy compression mechanism; if I had perfect self-knowledge, I wouldn't have any uncertainty about my future actions, but I don't have perfect self-knowledge. It also comes from the relative importance of decision-making as a 'natural concept' in the whole 'being a human' business.)

And so when philosophers ask questions like "When the cue ball knocks the nine ball into the corner pocket, what are the terms of this causal relation?" (from SEP), it seems to me like what they're mostly doing is getting confused about the various levels of their models, and mistaking properties of their models for properties of the territory.

That is, in the territory, the wavefunction of the universe updates according to dynamical equations, and that's that. It's only by going to higher level models that things like 'cause' and 'effect' start to become meaningful, and different modeling choices lead to different forms of cause and effect.

Now, there's an underlying question of how my map came to believe the statement about the territory that begins the previous paragraph, and that is indeed an interesting question with a long answer. There are also lots of subtle points, such as the fact that we don't really need an idea of counterfactuals to describe the universe and its dynamical equations, but we do need one to describe higher-level models of the universe that involve causality. But as far as I can tell, you don't get the main point right by talking about causal relata, and you don't get much out of talking about the subtle points until you get the main point right.

To elaborate a bit on that, hopefully in a way that makes it somewhat clearer why I find it aggravating or difficult to talk about why my approach to philosophy is better: typically I see a crisp and correct model that, if accepted, obsoletes other claims almost accidentally. If you accept the [universe: [me] -> [my future actions] ] model of free will, for example, then nearly everything written about why determinism is correct / incorrect or free will exists / doesn't exist is just missing the point and is implicitly addressed by getting the point right, and explicitly addressing it looks like repeating the point over and over again.

This is also where the sense that they're wrong about questions is coming from; compare to Babbage being surprised when an MP asked if his calculator would give the right output if given the wrong inputs. If they're asking X, then something else is going wrong upstream, and fixing that seems better than answering that question.

Comment author: WalterL 01 December 2016 07:44:23PM 0 points [-]

Scientists don't approach philosophy though, they run screaming in the other direction.

The Scientific Method doesn't work on untestable stuff.

Comment author: MugaSofer 29 November 2016 12:02:42PM *  3 points [-]

Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.

Comment author: TheAncientGeek 29 November 2016 01:59:44PM -2 points [-]

Is that a joke?

Comment author: TheAncientGeek 29 November 2016 03:16:48PM 1 point [-]

Too many for me to quickly count?

The last time I counted I came up with two and a half.

Comment author: eagain 23 January 2017 08:25:07PM 0 points [-]

Have you considered that there may be a lot of endless hashing out, not because some people have a preference for it, but because the problems are genuinely difficult?

I've considered that view and found it wanting, personally. Not every problem can be solved right now with an empirical test or a formal model. However, most that can be solved right now can be solved in such a way, and most that can't be solved in such a way right now can't be solved at all right now. Adding more "hashing out of big questions" doesn't seem to actually help; it just results in someone eventually going meta and questioning whether philosophy is even meant to make progress towards truth and understanding anyway.

Comment author: TheAncientGeek 23 January 2017 10:22:27PM 0 points [-]

Can you tell which problems can never be solved?

Comment author: eagain 02 February 2017 05:13:16AM 0 points [-]

Only an ill-posed problem can never be solved, in principle.

Comment author: TheAncientGeek 03 February 2017 01:40:53PM 0 points [-]

Is there a clear, algorithmic way of determining which problems are ill posed?

Comment author: Cloakless 16 July 2017 05:03:28PM 0 points [-]

Yeah, you just need a halting oracle and you're sorted.
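The quip can be unpacked: a halting oracle is precisely what Turing's diagonal argument rules out, so any "clear, algorithmic way" of sorting well-posed from ill-posed problems runs into undecidability. A sketch of the contradiction (the `halts` function below is hypothetical and, by this very argument, cannot actually be implemented):

```python
def halts(program, arg):
    """Hypothetical halting oracle: would return True iff program(arg) halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("no total, correct halting oracle can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return "halted"      # halt if the oracle says "loops"

# Feeding diagonal to itself: halts(diagonal, diagonal) can be neither True
# nor False without contradicting diagonal's own behaviour. So no algorithm
# decides halting, and a general ill-posedness detector would inherit the
# same undecidability.
```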