
Related: perverse ontological lock-in. Building things on top of ontological categories tends to cement them, since we think we need them to keep getting value from what we've built. But if the folk ontology doesn't carve reality at the joints, there will be friction in all the stories, predictions, and expectations built out of those ontological pieces, along with an unwillingness to drop the folk ontology out of the belief that doing so means losing all the value of the things built on top of it. One way to view the punctuated-equilibrium model of psychological development is as periodic rebasing operations.

I am skeptical of this whole thing, because calling someone else's side of a debate a "folk ontology" assumes that their side is the wrong side. So the whole article is basically saying "now that I've determined that my opponent is wrong, how should I deal with it?"--it sounds like a recipe for skipping that pesky debate stuff and prematurely assuming that one's opponent is wrong.

This post was meant to apply when you find that your own folk ontology is incorrect, or to assist people who agree that a folk ontology is incorrect but find themselves disagreeing because they have chosen different responses. Establishing that the folk ontology is incorrect is a prerequisite, and like all beliefs it should be subject to revision based on new evidence.

This is in no way meant to dismiss genuine debate. As a moral nihilist, I might put moral realism in the category of incorrect "folk ontology". However, if I'm discussing or debating with a moral realist, I will have to engage their arguments, not just dismiss them because I have already labeled their view a folk ontology. In such a debate, it can be helpful to recognize which response I have taken and to be clear when other participants may be adopting a different one.

Thanks for mentioning this concern. I'd kind of obliviously steelmanned it to "how to deal with my own folk ontologies", where my own intuitions about how something works don't match all the currently available evidence. This happens to me on many, many topics, and I gravitate toward the "restrict and recognize" mechanism, but was grateful to be reminded of other options.

If you're using this to categorize someone else's beliefs as "folk" when they don't agree that there are more complete models available, that's not likely to help much.


I think this is a valid point, but figuring out which side's ontology is "more accurate" is a different topic that just isn't what the original essay was about.

I guess my point is that no ontology is "fundamentally correct", in the sense that your worldview of concepts exists merely as an abstraction layer over reality. But it could definitely be the case that certain ontologies are "more accurate", in that they map more cleanly onto reality, or satisfy other properties which make them easier to handle. In that case, you might find it both instrumentally and epistemically useful to try to convince others to adopt such ontologies (and depart from the ones they are using, hence the need for the techniques above).

(But that of course would require you to demonstrate the nicer properties of your ontology, etc. which is a different topic.)

This seems on the right track, but maybe not pushed far enough. I think all human abilities (instincts, math, philosophy, etc.) arose in a specific kind of environment and can't be uniquely extended to all possible environments in our universe, never mind simulated ones. I don't trust any extrapolation procedure carried very far, because such procedures seem too sensitive (our thinking about weird situations is already quite divergent). So I guess an FAI will need to give us an environment that's good and mostly normal. That way it can skip rescuing many of our preferences, because in normal environments they work fine anyway.

In addition to the first two responses elucidated by Simon, there is a third available response to incorrect folk ontologies. This is to restrict the use of the idea to the circumstances or ways in which it can be reasonably applied while recognizing that it is fundamentally unreal.

In my opinion, the idea that some things are "fundamentally real", while others are "fundamentally unreal," is itself a folk ontology which should be rejected.

The article was rather optimistic about our ability to establish correspondence, rather than just attain stuff that works.

Exactly. The problem is that on the one hand, it is perfectly obvious that we say that something is true when it gives us correct expectations, and false when it gives us incorrect expectations. But on the other hand, we cannot explain what we mean by "correct" or "incorrect" expectations except by talking about correspondence or some equivalent. But then the idea of correspondence itself leads to incoherence (e.g. the Liar paradox.) There is no escape from this. This is why I recently commented that it is quite correct to see the Liar paradox as "deep and mysterious."

Is this supposed to be a cute little side note or a powerful counterargument?

It's possible to have better and worse ontologies even if philosophers can't settle which theory of truth is right. One could answer the Liar paradox along the lines of Russell's, Tarski's, Kripke's, or Priest's ideas, but this is irrelevant if one is interested in actually having accurate beliefs. It is not necessary to have a completely watertight, necessary-and-sufficient theory of truth in order to rank belief systems based on the evidence at hand and on evidence about human cognitive tendencies to create predictable folk theories.

It is not a cute little side note. Nor is it a very substantial argument as presented, but it is a crack in the OP's position that points in the direction of powerful counterarguments, ones strong enough to establish that their position is plainly wrong. This is why I said that the OP's supposed ability to distinguish between "fundamentally real" and "fundamentally unreal" is itself a folk ontology.

Let me point in the direction of a truer theory of reality. Consider the special theory of relativity. Here you can have two objects moving apart without one of them "really moving" and the other not. They are simply moving relative to one another. Now it is possible for someone to object: "Look. They were close together before. Now they are not. So something is different. Which one really changed? One of them must have changed for *real*, in order to end up in a different situation." But the response is that the notion of "real change" here is fundamentally misguided. "Moving relative to another" is itself the fundamental thing, and does not have to be grounded in a non-relative "real" motion.
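
A minimal sketch of the symmetry being invoked, for two inertial frames $S$ and $S'$ in standard configuration (relative speed $v$ along the shared $x$-axis): the Lorentz transformation from $S$ to $S'$ is

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

and the inverse transformation has exactly the same form with $v$ replaced by $-v$. Each frame describes the other as the one doing the moving, and nothing in the formalism singles out either description as the "real" motion.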

The OP's notion of "fundamentally real" is misguided in a similar way, like the idea of "real motion." I would propose an existential theory of relativity. To say that something exists, is to say that it exists relative to a reference frame, one which is often constituted by an observer, although not necessarily only in this way. In one way this is obvious: it is even typical to say that things in the past and the future do not exist, and the only way this is true is in relation to the one who is talking about them. But to see this more clearly, consider how skeptical scenarios would work. Suppose someone is a brain in a vat. If the person in the vat says, "I am a brain in a vat," it is evident that their statement is false. For their word "brain" refers to something in their simulation, and likewise their word "vat" refers to something in their simulation. And in terms of those things, they are not a brain in a vat. So they speak the truth only by saying "I am not a brain in a vat." On the other hand, because we have a different reference frame, we can truly say, "They are a brain in a vat."

Likewise, I pointed out not that long ago that the words "I am not a Boltzmann brain" are necessarily true in my reference frame. But there might be some other reference frame in which someone could truly call me a Boltzmann brain. And as in relativity, each of us might be a Boltzmann brain in the other's reference frame, but not in their own.

I started to write a lot more here to establish the truth of this position and to manifest other flaws in the OP's position, but then erased the rest because this comment is not the place to establish the true nature of reality.

But one obvious point. I am not saying that some ontologies are not truer than others. In fact, I am saying that the OP's ontology is very wrong, and not only in terms of what he places on various sides of "real" and "unreal" but in the very division itself.

First of all, that's the wrong level of analysis. There is nothing relativistic about the theory of relativity itself. The proper analogy would be between theories/ontologies/belief systems, not within the content of those theories.

No reference frame makes Newton's, Thomas Young's, Augustin-Jean Fresnel's, or Ernst Mach's ideas about motion more or less right compared to Einstein's. You need evidence to evaluate the ontologies, even if their content is relativistic.

No reference frame makes Newton's, Thomas Young's, Augustin-Jean Fresnel's, or Ernst Mach's ideas about motion more or less right compared to Einstein's.

Exactly. Newton's idea is that either a thing is in motion or it is not, absolutely speaking, without considering a reference frame. This is false.

I'm glad I'm not the only one to have noticed.

When we find that the concepts typically held by people, termed folk ontologies, don't correspond to the territory, what should we do with those terms/words? This post discusses three possible ways of handling them. Each is described and discussed with examples from science and philosophy.

Relevant: a review of Seeing Like a State (again :-D).

In particular, this part:

Scott distinguishes between metis and epistemic knowledge. Epistemic knowledge is from the state, and it’s often called “rational”. It’s based in scientific or “scientificish” knowledge. It’s so general as to apply everywhere, which means it kind of winds up applying nowhere. Sometimes this looks like farming techniques that are great in theory, but in practice are incapable of adapting to any single locale. Other times it looks like geometric land distribution that ignores local conditions.

Metis is much more ambiguous, a strange mix of hyper-empiricism and tradition and encoded ritual that has adapted to produce best results within a specific context. The origins of it would take a whole other book (presumably filled with Darwinian metaphors), but the point is that it works. It’s a reason that (using repeated examples of Scott’s) small village farms tend to vastly out-produce large epistemic plots;

...

In general: people have a reason they do things the way they do. The issues that arise aren’t just “humanitarian” in the sense of “people don’t like it.” They’re actually pretty pragmatic, and governments that fail to recognize this (or ignore concerns as merely “petty humanitarianism”) tend towards economic and agricultural chaos. Disrupting this from the outside normally means that you don’t actually understand them. Almost always, that hubris will make life less productive and less enjoyable.


This was a very good essay. Thank you for writing it; it's a nice categorization of things I've been trying to sort out in my own interactions with "folk ontologies".

One small thing: you mention that moral nihilism is a more productive response, but you don't seem to follow that up with much justification (this is in the Rejection section).