I'm making my way through George Lakoff's works on metaphor and embodied thought; are you familiar with the theory at all?
Unfortunately no, but from your description it sounds a lot like the theory of mind in General Semantics.
Whereas what you're saying is starting with symbols, which I think would be the reverse of what he's saying?
Not exactly, because in the end symbols are just units of perception, all distinct from one another. But while Lakoff's theory probably aims at psychology, logic is a denotational and computational tool, so it doesn't really matter if they aren't perfect inverses.
How does this connect to the map-territory distinction? Generally as I've understood it, logic is a form of map, but so too would be a model. Would a model be a map and logic be a map of a map? Am I getting that right?
Yes. Since a group of maps can be seen as a set of things in itself, it can be treated as a valid territory. In logic there are also map/territory loops, where the formulas themselves become the territory mapped by those same formulas (akin to talking in English about the English language). This trick is used, for example, in Gödel's and Tarski's theorems.
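A toy sketch of that loop, in case it helps: the standard trick (Gödel numbering) encodes each formula as a natural number, so that formulas about numbers can indirectly be formulas about formulas. The encoding below is a crude illustrative one, not the one Gödel actually used.

```python
# Toy map/territory loop: formulas become data that other formulas talk about.
# We encode a formula (a string) as a single natural number, base-256 digit
# by digit -- a crude stand-in for a real Goedel numbering.

def godel_number(formula: str) -> int:
    """Encode a formula string as an integer (base-256 digits)."""
    n = 0
    for ch in formula:
        n = n * 256 + ord(ch)
    return n

def decode(n: int) -> str:
    """Invert the encoding: recover the formula from its number."""
    chars = []
    while n > 0:
        chars.append(chr(n % 256))
        n //= 256
    return "".join(reversed(chars))

f = "0 = 0"
g = godel_number(f)
# A formula about numbers can now refer to another formula via its number:
meta = f"Formula #{g} is provable"
assert decode(g) == f
```

The point is only the round trip: once formulas are numbers, the language can be part of its own subject matter, which is the loop Gödel and Tarski exploit.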
This is something that has always confused me, the probability definition wars. Is there really something to argue about here?
Yes. Basically the Bayesian definition is more inclusive: e.g. there is no definition of the probability of a single coin toss in the frequency interpretation, but there is in the Bayesian one. Also, in the Bayesian take on probability the frequentist definition emerges as a natural by-product. Plus, the Bayesian framework disentangled a lot of frequentist statistics and introduced more powerful methods.
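To make the "by-product" point concrete, here is a minimal sketch using a uniform Beta(1, 1) prior on a coin's bias (Laplace's rule of succession): a single toss already has a well-defined probability, and with lots of data the posterior mean converges to the observed frequency, which is the frequentist answer.

```python
# Bayesian updating on coin tosses with a uniform Beta(1, 1) prior.
# The posterior mean after h heads in n tosses is (h + 1) / (n + 2),
# which approaches the raw frequency h / n as n grows -- so the
# frequentist definition falls out of the Bayesian one in the limit.

def posterior_mean(heads: int, tosses: int) -> float:
    """Posterior mean of P(heads) under a Beta(1, 1) prior (Laplace's rule)."""
    return (heads + 1) / (tosses + 2)

# A single toss already gets a probability, which frequentism can't give:
print(posterior_mean(1, 1))        # 2/3 after one observed head
# With plenty of data the posterior mean tracks the observed frequency:
print(posterior_mean(700, 1000))   # ~0.7, close to 700/1000
```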
Thank you to everyone for pointing me to Cox's theorems again. I know I've seen them before, but I think they're starting to click a little bit more on this pass.
The first two chapters of Jaynes' book, a pre-print version of which is available online for free, do a great job of explaining and using Cox's theorems to derive Bayesian probability. I urge you to read them to fully grasp this point of view.
And Richard Carrier's new book said that they're actually the same thing, which is just confusing
And easily falsifiable.
Also, am I doing it right for the one ontology and one interpretation that I've stumbled across, regardless of the others?
Yes, but remember that this measure interpretation of probability requires the set of possible worlds to be measurable, which is a very special condition to impose on a set. It is certainly very intuitive, but technically burdensome. If you plan to work with probability, it's better to start from a cleaner model.
Right, because in fuzzy logics the spectrum is the truth value (because being hot/cold, near/far, gay/straight, sexual/asexual, etc. is not an either/or), whereas with PTEL the spectrum is the level of certainty in a more staunch true/false dichotomy, right?
Yes. Fuzzy logic has an infinity of truth values for its propositions, while in PTEL every proposition is 'in reality' just true or false; you just don't know which, and so you track your certainty with a real number.
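The contrast can be sketched in code. Everything here is illustrative (the membership function and its thresholds are made up): a fuzzy truth value is a property of the proposition itself, while a credence is a property of your knowledge about a crisp, true-or-false proposition.

```python
# Two different numbers in [0, 1] that are easy to conflate.

def fuzzy_cold(temp_c: float) -> float:
    """Fuzzy logic: degree to which the tea 'is cold', a graded truth value.
    Illustrative linear ramp: fully cold at or below 10 C, not cold above 40 C."""
    return max(0.0, min(1.0, (40.0 - temp_c) / 30.0))

def credence_cold(prob_temp_below_20: float) -> float:
    """PTEL: credence that the crisp proposition 'temp < 20 C' is true.
    The proposition itself is just true or false; this number tracks our
    uncertainty about which, not a degree of coldness."""
    return prob_temp_below_20

print(fuzzy_cold(25.0))    # 0.5: the tea is 'half cold' as a truth value
print(credence_cold(0.5))  # 0.5: we are unsure whether it is cold at all
```

Same number, 0.5, but it answers two different questions: "how cold?" versus "how sure?".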
The other question I forgot to ask in the first post was how Bayes' Theorem interacts with group identity not being a matter of necessary and sufficient conditions, or for other fuzzy concepts like I mentioned earlier (near/far, &c.). For this would you just pick a mostly-arbitrary concept boundary so that you have a binary truth value to work with?
Yes. In PTEL you already have real numbers, so it's not difficult to just say "The tea is 0.7 cold", and provided you have a clean (that is, classical) interpretation for this, the sentence is just true or false. Then you can quantify your uncertainty: "I give 0.2 credence to the belief that the tea is 0.7 cold". More generally, "I give y credence to the belief that the tea is x cold".
What comes out is a probability distribution, that is, the assignment of a probability value to every value of a parameter (in this case, the coldness of the tea). Notice that this would be impossible in the frequentist interpretation.
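A minimal sketch of that distribution over the coldness parameter, with made-up weights purely for illustration:

```python
# Credence over a parameter: assign a probability to each candidate
# coldness value x, normalized so the credences sum to 1.
# The raw weights below are invented for the example.

coldness_values = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate values of x
raw_weights     = [1.0, 2.0, 4.0, 2.0, 1.0]     # unnormalized credences

total = sum(raw_weights)
distribution = {x: w / total for x, w in zip(coldness_values, raw_weights)}

# Each entry reads: "I give distribution[x] credence to 'the tea is x cold'."
assert abs(sum(distribution.values()) - 1.0) < 1e-12
print(distribution[0.5])  # 0.4 -- the most credible coldness value
```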
No, it doesn't. In the last LW survey, 0.7% of respondents identified as communists. Perhaps you're talking about the 30% that identified as "socialist"? But "socialism" in the survey was defined as support for a highly redistributive, socially permissive political regime, like they have in Scandinavian countries. That doesn't imply allegiance to Marxist doctrine, or knowledge of it.
As for the idea of "dialectic", Marx got it from Hegel, and a full understanding of Hegel -- if it is possible at all -- is not something that can be effectively communicated in a comment or short internet article, I think (this might help, but probably not). As a rough approximation, though, the dialectical method is basically just systems thinking.
It's presented as an alternative to the analytical method, which involves breaking a system down into parts and attempting to understand each part individually. The idea governing analytical thinking (says the proponent of dialectic) is that the intrinsic nature of the parts can be understood prior to figuring out how they fit together to form the whole.
Dialectical thinking, on the other hand, is based on the idea that the concrete nature of the parts cannot be understood without understanding the role they play in the whole, the relationships between them. So the analytic project, which focuses first on understanding parts considered individually, is doomed to failure, because it ignores the extent to which the overall context is essential for our understanding of the nature of the parts.
"Dialectical materialism" in Marxist thought is basically just an application of this dialectical thinking to economics. One could approach economics analytically by first, say, constructing a model of individual economic agents, and then trying to figure out what happens when these agents interact under certain conditions. The proponent of the dialectical method (like Marx) would, however, insist that this is mistaken. Human nature and human needs cannot be understood in isolation. They are a product of the socio-economic context, just as the socio-economic context is itself a product of human nature and needs, and the individual elements and overall context are constantly changing in response to one another. So to truly understand the dynamics of the economy, you need to approach it from a systems perspective. You need to start by understanding the historical dynamics of the interactions and relationships between elements of the system and how that affects the evolution of the natures of those elements, rather than starting with a static model of the individual elements and only then moving to an analysis of their interactions. The natures of individual elements are constituted by their participation in the system, and they change as the system evolves, so you shouldn't treat those individual natures as logically prior to the system.
So that's a quick and clumsy attempt at explicating what the dialectic method is all about. As for whether the method is useful: There is something right about the idea that focusing purely on the analytical method can lead to mistakes, but a complete repudiation of this very useful pattern of reasoning also seems to be a mistake. It seems to me that the dialectical method (and systems thinking in general) should be regarded as a useful complement (and often corrective) to analytical thinking, but not as a wholesale replacement.
Right, my mistake about the mistaken communism statistic; you're correct that I confused the two in my memory.
And that was a very thorough explanation; thank you. It matches what I could glean from my searches, but it was nice having it all in one place and in more straightforward terminology.