Polytopos

Science and Sanity looks pretty interesting. The book summary says he stressed that strict logical identity doesn't hold in reality. Can you say more about how he builds up a logical system without using the law of identity? How does equational reasoning work, for example?

Answer by Polytopos

Great question.

My joke answer is: probably Hegel but I don't know for sure because he's too difficult for me to understand.

My serious answer is Graham Priest, a philosopher and logician who has written extensively on paradoxes, non-classical logics, metaphysics, and theories of intensionality. His books are extremely technically demanding, but he is an excellent writer. To the extent that I've managed to understand what he is saying, it has improved my thinking a lot. He is one of those thinkers who is simultaneously extremely big-picture and super rigorous in the details and argumentation.

Ever since I first studied formal logic in my first year of undergrad, I have felt it had promise for clarifying our thinking. Unfortunately, over the next decade of my academic education in philosophy I was disappointed on the logic front. Logic seemed either irrelevant to the questions I was concerned with or, when it was used, it seemed to flatten and oversimplify the important nuances. Discovering Priest's books (a decade after I'd left academic philosophy) fulfilled my youthful dreams of logic as a tool for supercharging philosophy. Priest uses the formalisms of logic like an artist to paint wondrous and sometimes alien philosophical landscapes.

Books by Priest in suggested reading order:

An Introduction to Non-Classical Logic: From If to Is. Cambridge University Press, Second Edition, 2008.

  • It's a great reference. You don't need to read every page, but it is very helpful to turn to when trying to make sense of the rest of Priest's work.

Beyond the Limits of Thought. Oxford University Press, Second (extended) Edition, 2002.

  • Presents the surprisingly central role of paradoxes throughout the history of philosophy.
  • The unifying theme is Priest's thesis that we humans really are able to think about the absolute limits of our own thought in spite of the fact that such thinking inevitably results in paradoxes.

One: Being an Investigation into the Unity of Reality and of its Parts, including the Singular Object which is Nothingness. Oxford University Press, 2014.

  • A study in the metaphysics of parts and wholes.
  • A deeply counterintuitive but surprisingly powerful account based on contradictory entities.

Towards Non-Being. Oxford University Press, Second (extended) Edition, 2016.

  • An analysis of intentional mental states based on a metaphysics of non-existent entities.

Various fisheries have become so depleted as to no longer be commercially viable. One obvious example is the Canadian Maritime fishery. Despite advance warning that overfishing was leading to a collapse in cod populations, the cod were fished to the point of commercial non-viability, resulting in a regional economic collapse that has depressed standards of living in the Maritime provinces to this day.

"according to the story that your brain is telling, there is some phenomenology to it. But there isn't."

Doesn't this assume that we know what sort of thing phenomenal consciousness (qualia) is supposed to be, so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?

The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying something as an instance of the kind qualia is to point to something occurring in our experience. Given this, it remains difficult to understand how the story the brain tells about qualia could fail to be the truth, and nothing but the truth, about qualia (given the physicalist assumption that all our experience can be exhaustively explained through the brain's activity).

I see blue, and pointing to the experience of this seeing is the only way of indicating what I mean when I say "there is a blue quale". So, to echo J_Thomas_Moros, any story the brain is telling that constitutes my experience of blueness would simply be the quale itself (not an illusion of one).

For an in-depth argument that could be taken to support this point, I highly recommend Humankind: A Hopeful History by Rutger Bregman.

"it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience."

I find this claim interesting. I'm not entirely sure what you intend by the word "downstream", but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by "downstream" you have some other meaning in mind, please clarify. However, I will point out that you can't simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I'll leave it for now).

So given my reading of what you wrote, the obvious question to ask is: do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century, when there wasn't much neuroscience to speak of. Your view implies that Bayes (or Euclid, for that matter) was epistemically unjustified in his mathematical reasoning because he didn't understand the neural algorithms underlying his mathematical inferences.

If this is what you are claiming, I think it's problematic on a number of levels. First, it faces a steep initial plausibility problem in that it implies that mathematics as a field was unjustified for most of its thousands of years of history, until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn't know anything about geometry because of his ignorance of cognitive algorithms.

But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophisticated empirical science relies on the validity of mathematical inference to establish its theories. You can't use neuroscience to validate statistics when the validity of neuroscientific empirical methods themselves depends on the epistemic bona fides of statistics. With logic the case is even more obvious. An empirical science will rely on the validity of deductive inference in formulating its arguments (read any paper in any scientific journal). So there is no chance that the rules of logic will be ultimately justified through empirical research. Note this isn't the same as saying we can't know anything without assuming the prior validity of math and logic. We might have lots of basic kinds of knowledge about tables and chairs and such, but we can't have sophisticated knowledge of the sort gained through rigorous scientific research, as this relies essentially on complex reasoning for its own justification.

An important caveat to this is that of course we can have fruitful empirical research into our cognitive biases. For example, the famous Wason selection task showed that humans in general are not very reliable at applying the logical rule of modus tollens in an abstract context. However, crucially, in order to reach this finding, Wason (and other researchers) had to assume that they themselves knew the right answer on the task; i.e., the cognitive science researchers assumed the a priori validity of the deductive inference rule, based on their knowledge of formal logic. The same is true for Kahneman and Tversky's studies of bias in the areas of statistics and probability.
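For readers who don't know the task, here is the inference rule at issue and the standard setup (my own summary for illustration, not part of the original studies' wording):

$$\frac{p \rightarrow q \qquad \neg q}{\therefore\ \neg p} \quad \text{(modus tollens)}$$

In the classic version, four cards show E, K, 4, and 7, and the rule to test is "if a card has a vowel on one side, it has an even number on the other" ($p \rightarrow q$). The logically correct choices are E (checking $p$) and 7 (checking $\neg q$, the modus tollens step); most participants instead pick E and 4, even though the 4 card cannot falsify the rule either way.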

In summary, I am wholeheartedly in favour of using empirical research to inform our epistemology (in the way that the cognitive biases literature does). But there is a big difference between this and the claim that epistemology needs nothing in addition to empirical science. That stronger claim is simply not true. Mathematics is the clearest example of why this argument fails, but once one has accepted its failure in the case of mathematics, one can start to see how it might fail in other, less obvious ways.

This is a fascinating article about how the concept of originality differs in some Eastern cultures: https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original

An interesting contribution to this topic is Surfaces and Essences, the book by Hofstadter and Sander.

They explain thinking in terms of analogy, which, as they use the term, encompasses metaphor. The book is a mature, cognitive-science-flavoured articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.

I'm curious how many people here think of rationalism as synonymous with something like Quinean Naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology, a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's "Epistemology Naturalized" is an instructive text because it actually attempts to produce a rational argument for approaching foundational philosophical issues naturalistically. This is something I haven't seen much of on LW; naturalism usually seems to be taken as an assumed axiom, with no argument.

The value of attempting to make the arguments for naturalized epistemology explicit is that they can then be critiqued and evaluated. As it happens, when one reads Quine's work on this and thinks carefully about it, it becomes pretty evident that the project is problematic for various reasons, as many mainstream philosophers have attempted to make clear (e.g., the literature around the myth of the given).

I'd like to see more of that kind of foundational debate here, but maybe that's just because I've already been corrupted by the diseased discipline of philosophy ; )

You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type-theoretically (using slice categories).
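To make the "worlds as context" idea a little more concrete, here is a toy Kripke-style sketch in Haskell (my own illustration, not Corfield's categorical machinery; the world set and accessibility relation are invented for the example):

```haskell
-- Toy Kripke semantics: propositions as predicates on possible worlds,
-- with box ("necessarily") and diamond ("possibly") defined relative to
-- an accessibility relation. Illustrative only.

type World = Int
type Prop  = World -> Bool

worlds :: [World]
worlds = [0 .. 5]

-- Hypothetical accessibility relation. It is reflexive and transitive,
-- so box validates the S4-style laws: box p implies p, and
-- box p implies box (box p) -- the shape of a comonad's counit and
-- comultiplication.
accessible :: World -> World -> Bool
accessible w v = v >= w

-- "Necessarily p" holds at w iff p holds at every accessible world.
box :: Prop -> Prop
box p w = all p [v | v <- worlds, accessible w v]

-- "Possibly p" holds at w iff p holds at some accessible world.
diamond :: Prop -> Prop
diamond p w = any p [v | v <- worlds, accessible w v]

main :: IO ()
main = do
  let p w = w >= 2        -- a toy proposition
  print (box p 3)         -- True:  every world reachable from 3 satisfies p
  print (diamond p 0)     -- True:  some world reachable from 0 satisfies p
  print (box p 1)         -- False: world 1 is reachable from 1 and fails p
```

Changing the accessibility relation gives different modal systems, and in the categorical picture box and diamond become the comonad and monad the book (as I read the comment) is describing.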
