Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: JenniferRM 25 July 2017 06:54:24AM *  1 point [-]

I suspect that you are leaping to the idea of "infinite regress" much too quickly, and also failing to look past it or try to simply "patch" the regress in a practical way when you say:

Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher order prior distribution (a prior distribution over prior distributions). Infinite regress.

Consider the uses that the Dirichlet distribution is classically put to...

Basically, if you stack your distributions two or three (or heaven forbid four) layers deep, you will get a LOT of expressiveness, and yet the number of steps up the abstraction hierarchy can still be counted on the fingers of one hand. Within only a few thousand experiments, even the topmost of your distributions will probably start acquiring a bit of shape that usefully informs subsequent experiments.
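As a minimal sketch of the two-layer case (everything below, the candidate priors and the experiment outcomes, is invented for illustration), a discrete hyperprior over Beta priors for a coin's bias can be updated across a handful of experiments using the Beta-Binomial marginal likelihood:

```python
import math

# Two-level hierarchy, as a toy: a discrete hyperprior over candidate Beta(a, b)
# priors for a coin's bias. Each "experiment" is a batch of flips from a fresh
# coin; updating the hyperprior across experiments is the "distribution over
# distributions" step.

hypers = [(1, 1), (5, 1), (1, 5), (10, 10)]   # candidate Beta priors (invented)
log_w = [0.0] * len(hypers)                   # uniform hyperprior, in log space

def log_beta(x, y):
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def log_marginal(a, b, heads, tails):
    # log P(data | Beta(a, b) prior), i.e. the Beta-Binomial marginal
    # likelihood (binomial coefficient omitted: constant across hypers).
    return log_beta(a + heads, b + tails) - log_beta(a, b)

# Invented experiment outcomes (heads, tails): a heads-biased population.
experiments = [(9, 1), (8, 2), (10, 0), (7, 3)]

for heads, tails in experiments:
    for i, (a, b) in enumerate(hypers):
        log_w[i] += log_marginal(a, b, heads, tails)

# Normalize the hyperposterior; the heads-favoring Beta(5, 1) should dominate.
m = max(log_w)
w = [math.exp(lw - m) for lw in log_w]
post = [x / sum(w) for x in w]
for (a, b), p in zip(hypers, post):
    print(f"Beta({a},{b}): {p:.3f}")
```

Even after four small experiments, most of the hyperposterior mass lands on the heads-favoring Beta(5, 1): the top-level distribution has already acquired a useful shape.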

Probably part of the reason you seem to give up at the first layer of recursion and just assume that it will recurse unproductively forever is that you're thinking in terms of some small number of slogans (axioms?) that can be culturally transmitted in language by relatively normal people engaging in typical speech patterns, perhaps reporting high church Experiments that took weeks or months or years to perform, and get reported in a peer reviewed journal and so on.

Rather than conceptually center this academic practice, perhaps it would make more sense to think of "beliefs" as huge catalogues of microfacts, often subverbal, and "experiments" as being performed by even normal humans on the time scales of milliseconds to minutes?

The remarkable magical thing about humans is not that we can construct epistemies, the remarkable thing is that humans can walk, make eye contact and learn things from it, feed ourselves, and pick up sticks to wave around in a semi-coordinated fashion. This requires enormous amounts of experimentation, and once you start trying to build them from scratch yourself you realize the models involved here are astonishing feats of cognitive engineering.

Formal academic science is hilariously slow by comparison to babies.

The problem formal intellectual processes solve is not figuring things out quickly and solidly, but rather (among other things) the problem of lots of people independently figuring out many of the same things, in different orders, with different terminology, and ending up with the problem of Babel.

Praise be to Azathoth, for evolution already solved "being able to learn stuff pretty good" on its own and delivered this gift to each of us as a birthright. The thing left to us is to solve something like the "political economy of science". Credit assignment. Re-work. Economies of scale... (In light of social dynamics, Yvain's yearly predictions start to make a lot more sense.)

A useful keyword here is "social epistemology" and a good corpus of material is the early work of Kevin Zollman, including this overview defending the conceptual utility of social epistemology as a field.

Comment author: Onemorenickname 28 July 2017 06:49:21PM 0 points [-]

I suspect that you are leaping to the idea of "infinite regress" much too quickly, and also failing to look past it or try to simply "patch" the regress in a practical way when you say

No. I mention the practical patch right after: epistemies.

The remarkable magical thing about humans is not that we can construct epistemies, the remarkable thing is that humans can walk, make eye contact and learn things from it, feed ourselves, and pick up sticks to wave around in a semi-coordinated fashion.

Formal academic science is hilariously slow by comparison to babies.

Those are two different fields, with different problems. My answer to your point is that we have embedded epistemological/ontological knowledge when we are born. From a different line of comments:

However, let's say we consider naive observation and innate reasoning as being part of a proto-epistemy. Then we have to acknowledge too that we have a fair share of embedded ontological knowledge that we don't gain through experience, but that we have when we are born (time, space, multiplicity, weight, etc.). This is paramount, as without that, we would actually be trapped in infinite regress.

The problem formal intellectual processes solve is not figuring things out quickly and solidly

Well, formal verification, proof systems, NLP and AGI are a thing. So I disagree.

The thing left to us is to solve something like the "political economy of science".

No, there are plenty of other things, including the aforementioned one. But more primary is fixing the "gift as a birthright". That's the point of rationalism. Our innate epistemy is a bad one: it lets us walk, gather sticks and talk with people, but it makes for bad science most of the time.

A useful keyword here is "social epistemology"

Thanks for the pointer. Checking it.

Comment author: Bound_up 25 July 2017 11:26:37PM 0 points [-]

It's possible that LW people are the "nerds" I mean here, and normal nerds don't have beliefs either, as you say...It's hard for me to distinguish between how much I owe to LW and how much is instinctive.

But, since well before LW, I was always explicitly willing to sacrifice any belief, like my God belief, if there was no reason to hold it. There's that, at least; I think there are meaningful instinctive differences

Comment author: Onemorenickname 25 July 2017 11:40:28PM 1 point [-]

Indeed, we were talking about rationalists (not only LW, but SlateStarCodex too for instance).

I think there are meaningful instinctive differences too, but that's not the point, is it? If it were, then we could assume that people hold beliefs too. Sometimes they change their beliefs, too, because of reasons (or lack thereof).

Comment author: Onemorenickname 25 July 2017 11:06:18PM 3 points [-]

I agree with your model, but without the nerd-exception.

The lack of nerd focus on epistemology and meta-ethics implies that nerds don't have beliefs either.

They do have pressures to appear rational. Either external (peer pressure) or internal (intelligence/rationality being part of the core identity because of reasons).

The same model you mention has been useful for me in understanding why nerdy people don't actually care about the epistemic soundness of their arguments, and only about sounding rational. It made me understand why many were angered when I pointed out the lack of sound definitions for the words used, or the use of countless fallacies: it's perceived as an attack against their rationality.

Comment author: gworley 21 July 2017 09:38:48PM 1 point [-]

I take ontology and epistemology as separate. A science is defined by its object (ontos), and by its method (epistemy). Given I can make both arguments for different sciences (where their ontos comes before their epistemy, and where their epistemy comes before their ontos), I see them as separate.

Great! My misunderstanding.

When you say "that is ontology comes first but we have to experience it so there's no way for us to know that where epistemology isn't prior", you beg the question : you assume there is no higher order ontological knowledge, but that there is higher order epistemological knowledge (without which we couldn't have relevant experiments). And I can derive epistemological knowledge from observation and ontological knowledge as much as I can derive ontological knowledge from experiments and epistemological knowledge.

I suppose I do insofar as the very act of experiencing experience is experience and thus by at all noticing your experience you know a way of knowing. And although you may infer things about epistemology from ontology, you cannot derive them because ontology must be constructed from knowledge gained through experience (at least if we demand a phenomenological account of knowledge), and thus all ontology is tainted by the epistemological methods of experience used to gain such knowledge.

Comment author: Onemorenickname 22 July 2017 05:35:07AM 1 point [-]

I suppose I do insofar as the very act of experiencing experience is experience and thus by at all noticing your experience you know a way of knowing. And although you may infer things about epistemology from ontology, you cannot derive them because ontology must be constructed from knowledge gained through experience (at least if we demand a phenomenological account of knowledge), and thus all ontology is tainted by the epistemological methods of experience used to gain such knowledge.

Naive observation precedes any epistemic method to gain knowledge. However, let's say we consider naive observation and innate reasoning as being part of a proto-epistemy. Then we have to acknowledge too that we have a fair share of embedded ontological knowledge that we don't gain through experience, but that we have when we are born (time, space, multiplicity, weight, etc.).

This is paramount, as without that, we would actually be trapped in infinite regress.

Comment author: gworley 20 July 2017 08:22:10PM 1 point [-]

It's more that different fields of inquiry lead to different epistemies. If you want to study different fields, you have no a priori reason to use the same epistemy for both.

But you do because fields are just an after-the-fact construction to make understanding reality more manageable. There's just one reality (for a phenomenologically useful sense of "reality" as the thing which you experience), fields just pick a part of it to focus on, and as such there is much overlap between how we know things in fields.

To be concrete about it, there are many fields we consider part of science and they all use the shared epistemological methods of science to explore particular topics. We don't reinvent science for physics, biology, etc. each time because each field is really just choosing to focus on a particular part of the questions science is designed to answer.

I think there's also a deeper confusion here where you seem to be thinking as if ontology comes first. That is, you are taking a transcendental stance. Otherwise you would see an a priori reason to use the same epistemology in multiple fields because epistemology would be prior to ontology. However the only way the transcendental stance is defensible is if it's unnecessary: that is ontology comes first but we have to experience it so there's no way for us to know that where epistemology isn't prior. Failing the test of parsimony, we should then reject transcendentalism anyway within our understanding.

Comment author: Onemorenickname 21 July 2017 02:38:16AM 0 points [-]

But you do because fields are just an after-the-fact construction to make understanding reality more manageable. There's just one reality (for a phenomenologically useful sense of "reality" as the thing which you experience), fields just pick a part of it to focus on, and as such there is much overlap between how we know things in fields.

I disagree thoroughly with that paragraph.

Science is not about "understanding reality", or at least not "reality" as "the thing which you experience". The impact of science on "the thing which we experience" can only be seen through pragmatism: quantum physics is good not because it gives some of us a more manageable understanding of reality, but because it gives all of us tools relying on quantum effects.
If we talk about science as "understanding reality", then that reality is not "the thing which we experience". And in that case, science understands many different, sometimes independent realities.

"As such there is much overlap between how we know things in fields": there is only small overlap between NLP, linguistics, and cognitive psychology, all three studying natural languages. There are strong differences between logic from a philosophical point of view, logic from a mathematical-foundations point of view, and logic from a CS point of view, all three studying logic. A science is defined by its object and by its method.

To be concrete about it, there are many fields we consider part of science and they all use the shared epistemological methods of science to explore particular topics.

Well, if you put all the methods used in different sciences into a common set of "shared epistemological methods of science", then I have to tautologically agree. But as well as concrete differences (a chemical experimental protocol is very different from a physical one), there are abstract differences (controlled experiments, natural experiments, historical inquiry, formal proof, naked human reasoning). So I don't understand your point.

We don't reinvent science for physics, biology, etc. each time because each field is really just choosing to focus on a particular part of the questions science is designed to answer.

Well, if a field is solely an "after-the-fact construction", there is no intention or design in fields.
Putting that aside, my explanation for the fact that we don't reinvent science every time is more down-to-earth: the tragedy of the commons, and chronology. Focusing on epistemology is hard and time-consuming, and it doesn't benefit individuals but everyone at the same time, so except in particular instances (foundational crises), researchers won't take the burden on themselves. Also, epistemological advances came after these fields were set.

I think there's also a deeper confusion here where you seem to be thinking as if ontology comes first. That is, you are taking a transcendental stance.

I take ontology and epistemology as separate. A science is defined by its object (ontos), and by its method (epistemy). Given I can make both arguments for different sciences (where their ontos comes before their epistemy, and where their epistemy comes before their ontos), I see them as separate.

When you say "that is ontology comes first but we have to experience it so there's no way for us to know that where epistemology isn't prior", you beg the question : you assume there is no higher order ontological knowledge, but that there is higher order epistemological knowledge (without which we couldn't have relevant experiments). And I can derive epistemological knowledge from observation and ontological knowledge as much as I can derive ontological knowledge from experiments and epistemological knowledge.

Comment author: gworley 19 July 2017 09:41:29PM 1 point [-]

I guess it's somewhat unclear to me just what work "epistemy" is doing given how you try to use it in your first question. Certainly a person's epistemology affects their understanding of many things, and recognizing weaknesses in epistemology may be exposed by pursuing particular fields of inquiry, but then you ask to "define an epistemy to build new models of human psychology" and that seems like a teleological approach to epistemology which, if I'm honest, seems entirely backwards from the rationalist approach (but maybe that's what you're going for?).

I guess I'm also somewhat unclear on what binds these ideas/questions together. I think you know but it's not immediately obvious to me beyond saying it's very broadly all about knowing, but then so is everything.

Comment author: Onemorenickname 20 July 2017 02:02:43PM *  0 points [-]

Certainly a person's epistemology affects their understanding of many things

I think having a single epistemy to deal with everything is a mistake. It follows from the post that the strength of an epistemy lies in its specialization.

I guess it's somewhat unclear to me just what work "epistemy" is doing

I don't understand what "what work is [X] doing" means in this context.

that seems like a teleological approach to epistemology

It's more that different fields of inquiry lead to different epistemies. If you want to study different fields, you have no a priori reason to use the same epistemy for both.

I guess I'm also somewhat unclear on what binds these ideas/questions together.

I don't know of a Bayesianist account of epistemies. As such, I'm shotgunning questions aiming to reveal one. The questions are spread over different fields, and over different positions on the abstract-concrete spectrum.

Looking for ideas about Epistemology related topics

1 Onemorenickname 19 July 2017 06:56PM

Notes :

  • "Epistemy" refers to the second meaning of epistemology: "A particular theory of knowledge".
  • I'm more interested in ideas to further the thoughts exposed here than in exposing them.

 

Good Experiments

The point of "Priors are useless" is that if you update after enough experiments, you tend to the truth distribution regardless of your initial prior distribution (assuming its codomain doesn't include 0 and 1, or at least that it doesn't assign 1 to a non-truth or 0 to a truth). However, "enough experiments" is magic:

  1. The pure quantitative aspect : you might not have time to do these experiments in your lifetime.
  2. Having independent experiments is not well-defined. Knowing which experiments are pairwise independent embeds higher-level knowledge that could easily be used to derive truths directly. If we try to prove a mathematical theorem, comparing the pairwise success-probability correlations of different approaches would give much more insight and results than trying to prove it as usual.
  3. We don't need pairwise independence. For instance, if we assume P ≠ NP because we couldn't prove it, we do so because we expect the techniques used not to all be correlated together. However, this expectation is either wrong (Small list of fairly accepted conjectures that were later disproved) or stems from higher-order knowledge (knowledge about knowledge). Infinite regress.
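The convergence claim itself can be checked numerically. The following sketch (true bias, priors, and flip count are all invented for illustration) updates two very different priors on the same simulated flips:

```python
import random

# Toy check of "priors wash out": two very different priors over a coin's bias,
# updated on the same flips, end up nearly identical. The grid deliberately
# excludes 0 and 1, matching the caveat about the prior's codomain.

random.seed(0)
grid = [i / 100 for i in range(1, 100)]           # bias hypotheses, 0.01..0.99

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def update(post, heads):
    # Bayes rule for one flip: multiply by p (heads) or 1 - p (tails).
    return normalize([w * (p if heads else 1 - p) for w, p in zip(post, grid)])

post_a = normalize([1.0] * len(grid))             # flat prior
post_b = normalize([(1 - p) ** 3 for p in grid])  # prior skewed toward low bias

true_bias = 0.7                                   # invented ground truth
for _ in range(500):
    flip = random.random() < true_bias
    post_a, post_b = update(post_a, flip), update(post_b, flip)

tv = sum(abs(a - b) for a, b in zip(post_a, post_b)) / 2  # total variation
print(f"TV distance after 500 flips: {tv:.4f}")
```

After a few hundred flips the two posteriors are close in total variation, precisely because neither prior assigns 0 or 1 anywhere; point 1 above is the catch, since "a few hundred independent experiments" is cheap for coin flips and expensive for almost anything else.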

Good Priors

However, conversely, having a good prior distribution is magic too. You could have a prior distribution assigning 1 to truths and 0 to non-truths. So you might want the additional requirement that the prior distribution be computable. But there are two problems:

  1. There aren't many known computable prior distributions. Occam's razor (in terms of Kolmogorov complexity in a given language) is one, but it fails miserably in most interesting situations. Think of poker, or a simplified version thereof: A+K+Q. If someone bets, the simplest explanation is that he has good cards. Most interesting situations where we want to apply Bayesianism come from human interactions (we managed to do hard sciences before Bayesianism, and we still have trouble with social sciences). As such, failing to take bluffing into account is a big epistemic fault for a prior distribution.
  2. Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher order prior distribution (a prior distribution over prior distributions). Infinite regress.
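To make point 1 concrete, here is a toy simplicity prior (the hypothesis list and data are invented; 2^(-string length) is only a crude stand-in for Kolmogorov complexity in a given language):

```python
# Toy "Occam" prior: hypotheses are short Python expressions over x, weighted
# 2^(-length) as a crude stand-in for Kolmogorov complexity in a given
# language. The hypothesis list and data are invented for illustration.

hypotheses = ["x", "x + 1", "2 * x", "x * x", "x * x + x"]
data = [(0, 0), (1, 2), (2, 6), (3, 12)]       # observed (x, f(x)) pairs

def prior(h):
    return 2.0 ** -len(h)                      # shorter description = likelier

def likelihood(h, data):
    f = lambda x: eval(h)                      # toy only: eval is unsafe
    return 1.0 if all(f(x) == y for x, y in data) else 0.0

weights = {h: prior(h) * likelihood(h, data) for h in hypotheses}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}
best = max(posterior, key=posterior.get)
print(best, posterior[best])
```

On curve-fitting toys like this the simplicity prior behaves well; the poker point above is that in adversarial settings the simplest consistent explanation ("he has good cards") can be systematically wrong.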

Epistemies

In real life, we don't encounter these infinite regresses. We use epistemies. An epistemy is usually a set of axioms plus a methodology to derive truths with these axioms. They form a trusted core that we can use once we understand the limits of the underlying meta-assumptions and methodology.

Epistemies are good because, instead of thinking about the infinite chain of higher priors every time we want to prove a simple statement, we can rely on an epistemy. But they are regularly not defined, not properly followed, or not even understood, leading to epistemic faults.

Questions

As such, I'm interested in the following :

  • When and how do we define new epistemies? E.g., "Should we define an epistemy for evaluating the Utility of actions for EA?", "How should we define an epistemy to build new models of human psychology?", etc.
  • How to account for epistemic changes in Bayesianism? (This requires self-reference, which Bayesianism lacks.)
  • How to make sense of Scott Alexander's yearly predictions? Is it only a blackbox telling us to bet more on future predictions, or do we have a better analysis?
  • What prior distributions are interesting for studying human behavior? (For a given restricted class of situations, of course.)
  • Are answers to the previous questions useful? Are the previous questions meaningful?

I'm looking for ideas and pointers/links.

Even if your thought seems obvious, if I didn't explicitly mention it, it's worth commenting. I'll add it to this post.

Even if you only have an idea for one of the questions, or a particular criticism of a point made in the post, go ahead.

 

Thank you for reading this far.

Comment author: SamDeere 27 June 2017 08:42:09PM 0 points [-]

Thought I had; turns out you need to verify separately for the wiki and the forum. Thanks Julia for posting.

Comment author: Onemorenickname 02 July 2017 06:29:26AM 0 points [-]

Thank you for your thorough answer. :)

Comment author: Viliam 26 June 2017 10:18:38AM 0 points [-]

Both. Christian posted 2 FB groups, 1 subreddit, and one separate page; that seems quite enough to me, for general discussion. How much would you consider optimal?

If the user base grows, that is a good thing, but if the communication costs grow, that is a bad thing. So the communication needs to be organized effectively. I will assume the usual distribution where a few core people do most of the work and the majority of people are mostly or exclusively there to chat. Two ways things could go wrong:

1) The more channels and the more debate there is, the more time the important people will spend participating in the discussion, and the less time will be left for their work.

2) The important people will not participate in some of the discussions, which means someone else (perhaps the person with most free time, or most loud voice) will take over, not necessarily in a good way.

As an example of the latter, there was a FB group called "Less Wrong" where Eliezer didn't have time to participate, and it evolved into something... uhm, not representative of LW... so at the end Eliezer asked them to change their name, because the association seemed harmful for LW.

Comment author: Onemorenickname 27 June 2017 08:25:29AM *  0 points [-]

I am not asking for a general discussion place, but for an idea repository with dedicated discussion places.

From the post :

The current forum doesn’t cut it: it isn’t meant for that end. It’s easier to build a forum dedicated to that than to try to artificially support a balance between “New Ideas” posts and “Information Sharing” posts so that none of these get overshadowed. The same problem applies to existing reddit boards and facebook groups.

Also, regular discussion places (reddit, fb) aren't really meant as thread repositories: pinning more and more threads is a nuisance to the discussion part.

Comment author: casebash 25 June 2017 06:13:24AM *  1 point [-]

Additionally, how come you posted here instead of on the Effective Altruism forum: http://effective-altruism.com/?

Comment author: Onemorenickname 27 June 2017 08:23:16AM 0 points [-]

I initially needed an editor I was used to in order to link a post to someone on the EA Discord Server.

I thought I might as well do it on LW to gather input from LWians.
