Peterdjones comments on Pancritical Rationalism Can Apply to Preferences and Behavior - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (29)
I’d first like to congratulate you on a much more reasonable presentation of Popperian ideas than the recent trolling.
What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.
Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.
I don't mean to imply that either of these is correct, but it seems that if you're going to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.
This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudential reasons. But these don't really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).
Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just like you were stuck with P(H) < 0.9!
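The equivalence can be made concrete: clamping your posterior at 0.9 behaves, from the outside, exactly like being stuck below 0.9, no matter how much evidence piles up. A minimal sketch (the function names and the likelihood numbers are illustrative, not from the comment):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Standard Bayesian update of P(H) on observing evidence E."""
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

def capped_update(prior, p_e_given_h, p_e_given_not_h, cap=0.9):
    """Policy: never let P(H) rise above the cap."""
    return min(bayes_update(prior, p_e_given_h, p_e_given_not_h), cap)

p = 0.5
for _ in range(10):
    # repeatedly observe evidence twice as likely under H as under not-H
    p = capped_update(p, 0.8, 0.4)
print(p)  # pinned at exactly 0.9, however many observations accumulate
```

The uncapped updater would converge toward 1; the capped one hits 0.9 after a few observations and sits there, which is the "stuck" behavior the comment describes.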
It seems that there is a big difference between the two cases. We can criticize beliefs because we have a standard by which to measure them – reality – in the same way that we can criticize maps if they're not very accurate representations of the territory. But it's not at all clear that we have anything analogous with preferences. True, you could criticize my short-term preference of going to lectures as ineffective towards my long-term goal of getting my degree, but there doesn't seem to be any canonical metric by which to criticize deep, foundational preferences.
One of the most important aspects of our epistemology with regard to factual beliefs is that the set of beliefs a computationally unlimited agent should hold is uniquely determined by the evidence it has witnessed. However, this doesn't seem to be the case with preferences: if I have a single long-term goal, there's no proof it should be {live a long time} rather than {die soon}. Without a constraining external metric, there are many consistent sets, and the only criticism you can ultimately bring to bear is one of inconsistency.
Yes, Bartley's justificationism munges together two different ideas:
1) beliefs can only be justified by other beliefs
2) beliefs can be positively supported and not just refuted/criticised.
The attack on "justificationism" is actually a problem for Popperianism, since a classic refutation rests on a single observation, such as a black swan. However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?
Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white". You still have a choice -- you can see the black swan and reject "all swans are white", or you can quibble with the evidence in a large number of ways which I'm sure you know of too and keep on believing "all swans are white". People really do that; searching Google for "Rapture schedule" will pull up a prominent and current example.
Fine. If criticism is just a loose sort of refutation, then I'll invent something that is just a loose kind of inductive support, let's say schmitticism, and then I'll claim that every time I see a white swan, that schmitticises the claim that all swans are white, and Popper can't say schmitticism doesn't work because there are no particular well-defined standards or mechanisms of schmitticism for his arguments to latch onto.
Why not just phrase it in terms of utility? "Justification" can mean too many different things.
Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.
Putting it in terms of beliefs paying rent in anticipated experiences, the belief "all swans are white" told me to anticipate that if I knew there was a black animal perched on my shoulder, it could not be a swan. Now that belief isn't as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong – that is, cause me to lose.
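The "get by with it for most applications" point can be put in numbers: the rule's error rate is just the frequency with which black swans actually turn up. A small simulation sketch (the rule, names, and frequencies are illustrative assumptions, not from the thread):

```python
import random

random.seed(0)

def rule_says_not_a_swan(animal_is_black):
    """The anticipation licensed by 'all swans are white':
    a black animal cannot be a swan."""
    return animal_is_black

def error_rate(black_swan_freq, trials=100_000):
    """How often the rule wrongly rules out an actual swan,
    when swans on my shoulder are black with the given frequency."""
    errors = 0
    for _ in range(trials):
        swan_is_black = random.random() < black_swan_freq
        if swan_is_black and rule_says_not_a_swan(swan_is_black):
            errors += 1
    return errors / trials

print(error_rate(0.001))  # rare black swans: rule misleads rarely
print(error_rate(0.3))    # common black swans: rule misleads often
```

With black swans at 0.1% frequency the belief still pays most of its rent; at 30% it steers you wrong nearly a third of the time, which is the sense in which the observation diminishes rather than instantly destroys the belief's usefulness.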
So can't this all be better phrased in more established LW terms?
I think you've just reinvented pragmatism.
ETA: Ugh, that Wikipedia page is remarkably uninformative... anyone have a better link?