
Comment author: IlyaShpitser 22 November 2017 03:24:07PM * 0 points

Yeah, credentials are a poor way of judging things.

They are not, though. It's standard "what LW calls 'Bayes' and what I call 'reasoning under uncertainty'" -- you condition on things associated with the outcome, since those things carry information. Let O be the outcome (having a clue) and C be the thing (a credential). Since p(O | C) > p(O), your credence in O should be computed after conditioning on C, on pain of irrationality -- specifically, the type of irrationality where you leave information on the table.
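To make the conditioning argument concrete, here is a minimal sketch with made-up numbers (the probabilities are assumptions for illustration, not taken from the comment): so long as credentials are more common among the clueful than the clueless, conditioning on them raises your credence that someone has a clue.

```python
# Toy Bayes update: O = "has a clue", C = "holds credentials".
# All numbers are assumed for illustration.
p_O = 0.10             # prior: 10% of people have a clue
p_C_given_O = 0.60     # clueful people usually hold credentials
p_C_given_notO = 0.15  # some clueless people hold them too

# Total probability of observing credentials:
p_C = p_C_given_O * p_O + p_C_given_notO * (1 - p_O)  # 0.195

# Bayes' rule: credence in O after conditioning on C.
p_O_given_C = p_C_given_O * p_O / p_C
print(round(p_O_given_C, 2))  # ~0.31, up from the 0.10 prior
```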


You might say "oh, I heard about how argument screens authority." This is actually not true though, even by "LW Bayesian" lights, because you can never be certain you got the argument right (or that the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn't true.

It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel, crazy thing I am proposing; it is bog standard.


The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of "experts" writ large, except for an explicitly enumerated subset (perhaps ones EY or other "recognized community thought leaders" liked).

This is a part of community DNA, starting with EY's stuff, and Luke's "philosophy is a diseased discipline."

That is crazy.

Comment author: gjm 22 November 2017 05:16:07PM 0 points

They are not, though.

Actually, I somewhat agree, but being an agreeable sort of chap I'm willing to concede things arguendo when there's no compelling reason to do otherwise :-), which is why I said "Yeah, credentials are a poor way of judging things" rather than hedging more.

More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don't consider him an authority on whether any sort of gods exist, because I think the process that gave him the credentials he has isn't sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is, and I would consider the Pope a very good authority on that topic even if he didn't have the ability to control that doctrine as well as report it.)

It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn't do this so well; that theology does it worse; that homeopathy does it worse still. (This isn't just about the moral or cognitive excellence of the disciplines in question; it's also that it's harder to tell whether someone's any good or not in some fields than in others.)

Comment author: curi 20 November 2017 09:15:59PM * 0 points

Here's a tricky example of judging authority (credentials). You say listen to SA about QM. Presumably also listen to David Deutsch (DD), who knows more about QM than SA does. But what about me? I have talked with DD about QM and other issues at great length, and I have a very accurate understanding of which things I can say about QM (and other matters) that are what DD would say, and when I don't know something or disagree with DD. (I have done things like debate physics, with physicists, many times, while being advised by DD and him checking all my statements, so I find out when I have his views right or not.) So my claims about QM are about as good as DD's, when I make them -- and are therefore even better than SA's, even though I'm not a physicist. Sorta, not exactly. Credentials are complicated and such a bad way to judge ideas.

What I find most people do is decide what they want to believe or listen to first, and then find an expert who says it second. So if someone doesn't want to listen, credentials won't help, they'll just find some credentials that go the other way. DD has had the same experience repeatedly – people aren't persuaded due to his credentials. That's one of the main reasons I'm here instead of DD – his credentials wouldn't actually help with getting people here to listen/understand. And, as I've been demonstrating and DD and I already knew, arguments aren't very effective here either (just like elsewhere).

And I, btw, didn't take things on authority from DD – I asked questions and brought up doubts and counter-arguments. His credentials didn't matter to me, but his arguments did. Which is why he liked talking with me!

Comment author: gjm 21 November 2017 10:18:25PM 0 points

Yeah, credentials are a poor way of judging things. But that first paragraph doesn't show remotely what you think it does.

Some of David Deutsch's credentials that establish him as a credible authority on quantum mechanics: he is a physics professor at a leading university and a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.

Your credentials as a credible authority on quantum mechanics: You assure us that you've talked a lot with David Deutsch and learned a lot from him about quantum mechanics.

This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don't.

It's not clear to me what argument you're actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that's flatly untrue.

Comment author: curi 21 November 2017 04:13:57AM 0 points

you have openly stated your unwillingness to

1) do PF

2) discuss PF or other methodology

that's an impasse, created by you. you won't use the methodology i think is needed for making progress, and won't discuss the disagreement. a particular example issue is your hostility to the use of references.

the end.

I am very willing to have a conversation.

given your rules, including the impasse above.

Comment author: gjm 21 November 2017 01:37:35PM 2 points

you have openly stated your unwillingness [...]

Yup. I'm not interested in jumping through the idiosyncratic set of hoops you choose to set up.

that's an impasse, created by you.

Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don't try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So ... nah, I'm not buying "created by you". I'm not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.

your hostility to the use of references

You just made that up. I am not hostile to the use of references.

(Maybe I objected to something you did that involved the use of references; I don't remember. But if I did, it wasn't because I am hostile to the use of references.)

Comment author: curi 20 November 2017 03:57:00AM 0 points

you haven't cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.

by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.

the claims made by some c.r. proponents

it's dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as "some c.r. proponents".

you refuse to try to quantify how error-prone any particular judgement is.

no. i have tried and found it's impossible, and found out why (arguments you don't wish to learn).

anyway i don't see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren't really looking to have a conversation to resolve the matter. why speak at all?

Comment author: gjm 21 November 2017 03:29:29AM 0 points

you haven't cared to [...]

Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.

it's dishonest (or ignorant?) [...]

Why?

arguments you don't wish to learn

Don't wish to learn them? True enough. I don't see your relationship to me as being that of teacher to learner. I'd be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.

I don't see what your comment is supposed to accomplish.

It is supposed to point out some errors in things you wrote, and to answer some questions you raised.

you have 1.8 of your feet out the door.

Does that actually mean anything? If so, what?

you aren't really looking to have a conversation to resolve the matter.

I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose ("paths forward"), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about "paths forward" that you keep trying to replace it with.

why speak at all?

See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It's not my problem that you're unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.

Comment author: curi 20 November 2017 01:41:40AM 0 points

Does it make sense to say that the probability of making the mistake in the judgment B is higher than the probability of making the mistake in the judgment A?

It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability because you're already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming -- but that doesn't work in all cases. So you can't figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes while others are just unknown (and you can't combine "unknown" and "90%" to get an overall probability).

I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There are always framework questions too -- e.g. you may say that, according to your framework and given your context, you're unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn't know the probability that your framework has a mistake, so you can't get an overall probability this way.

Difficult to do, and even more difficult to justify in a debate.

if you're already aware that your system doesn't really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they're wrong in some way wasn't enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn't care to refute.)

This may even be a hard limit on human certainty.

the hard limit is we don't have certainty, we're fallible. that's it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.

Suppose the theory predicts that an energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying "p<0.05" is also just an arbitrary number; why not "p<0.001"?)

you have to make a decision about what standards of evidence you will use for what purpose, and why that's the right thing to do, and expose that meta decision to criticism.

the epistemology issues we're talking about are prior to the physics issues, and don't involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren't probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)
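As an aside on the quoted measurement example, here is a minimal sketch (every number in it is assumed, not from the thread) of how such a falsification check is commonly operationalized, and where the decision about standards of evidence enters: the threshold is something someone chooses and has to defend, not something the data hands you.

```python
# Does a measurement of 0.041 "falsify" a prediction of 0.04?
# The verdict depends on an assumed measurement error and a chosen threshold.
predicted = 0.040
measured = 0.041
sigma = 0.002  # assumed standard error of the measurement

z = abs(measured - predicted) / sigma  # 0.5 standard errors away

# The threshold is a decision: ~2 sigma corresponds to the conventional
# p < 0.05, while particle physics often demands 5 sigma.
threshold = 2.0
print("discrepant" if z > threshold else "compatible")  # -> "compatible"
```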

see http://curi.us/2067-empiricism-and-instrumentalism and https://yesornophilosophy.com

You can have a "binary" solution only as long as you remain in the realm of words.

no, a problem can and should specify criteria for what the bar is for a solution to it. lots of the problems people have are due to badly formulated (ambiguous) problems.

which means you wouldn't feel a 100% certainty after the first reading

i do not value certainty as a feeling. i'm after objective knowledge, not feelings.

Comment author: gjm 20 November 2017 03:18:13AM 2 points

If you're already aware that your system doesn't work, due to this regress problem,

That isn't what Viliam said, and I suggest that here you're playing rhetorical games rather than arguing in good faith. It's as if someone took your fallibilism and your rejection of probability, and said "Since you admit that you could well be wrong and you have no idea how likely it is that you're wrong, why should we take any notice of what you say?".

why does no one here study the philosophy which has a solution to this problem?

You mean "the philosophy which claims to have a solution to this problem". (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)

Anyway, I think the answer depends on what you mean by "study". If you mean "investigate at all" then the answer is that several people here have considered some version of Popperian "critical rationalism", so your question has a false premise. If you mean "study in depth" then the answer is that by and large those who've considered "critical rationalism" have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.

My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else's, is that I've read a couple of Deutsch's books and some of his other writings and given Deutsch's version of "critical rationalism" hours, but not weeks, of thought, and that since you turned up here I've given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. "solves" the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren't explicitly about random events, even less credible; and that the "return on investment" of further in-depth investigation of Popper's or Deutsch's ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they're all bad ideas but because I think I already grasp them well enough for my purposes.

the epistemology issues [...] are prior to the physics issues, and don't involve that kind of measurement error issue.

A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to "resolve epistemology" in a way that ignores such errors. (Perhaps that isn't what you meant by "we can talk about measurement error after resolving epistemology", in which case some clarification would be a good idea.)

What we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.

You say that as if you expect it to be a new idea around here, but it isn't. See e.g. this old LW article. For the avoidance of doubt, I'm not claiming that what that says about knowledge and certainty is the same as you would say -- it isn't -- nor that what it says is original to its author -- it isn't. Just that distinguishing knowledge from certainty is something we're already comfortable with.

I do not value certainty as a feeling.

You would equally not be entitled to 100% certainty of any other sort, including kinds you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
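To spell out that parenthetical with a minimal sketch (the counts and error rate are assumed for illustration): when a conclusion rests on many independent judgements, each only slightly error-prone, the probability that at least one of them is wrong compounds quickly.

```python
# Probability of at least one error among n independent judgements,
# each with an assumed small per-judgement error probability.
n = 100       # independent judgements behind the conclusion
p_err = 0.01  # assumed error probability per judgement

p_no_error = (1 - p_err) ** n    # ~0.366
print(round(1 - p_no_error, 3))  # ~0.634: more likely than not,
                                 # at least one judgement is wrong
```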

Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is "obviously" a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.

(And when you complain above of an infinite regress, it's precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren't going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn't seem to me that it does.)

Comment author: curi 09 November 2017 08:03:34PM 0 points

what do you do about ideas which make identical predictions?

Comment author: gjm 09 November 2017 08:36:14PM 0 points

They get identical probabilities -- if their prior probabilities were equal.

If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here "simpler" means something like "when turned into a completely explicit computer program, has shorter source code". Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)
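Here is a minimal sketch of what such a simplicity-weighted prior can look like (a toy example in the spirit of Solomonoff induction; the hypothesis names and program lengths are invented for illustration, not any official LW formalization): each hypothesis gets prior mass proportional to 2 raised to minus its program length in bits, so a shorter program wins by an exponential margin.

```python
# Toy simplicity prior: prior weight ~ 2^(-program length in bits).
# Program lengths are assumed numbers for two predictively identical theories.
hypotheses = {
    "simple_theory": 120,   # compresses to a 120-bit program
    "baroque_theory": 180,  # same predictions, 180-bit program
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}
print(priors)  # the 60-bit-shorter theory gets ~2^60 times the prior mass
```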

In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)

Comment author: curi 09 November 2017 12:39:54AM 0 points

there are methods for doing Paths Forward with limited resource use. you just don't want to learn/discuss/use them.

Comment author: gjm 09 November 2017 01:19:32PM 3 points

The total of what your "paths forward" page says about limited resources: (1) instead of writing your own answers to every criticism, you can point critics to already-written things that address their criticisms; (2) if you have a suitable forum with like-thinking other people there, they may address the criticisms for you.

Perhaps it seems to you that these make it reasonable to have a policy of addressing every criticism and question despite limited resources. It doesn't seem so to me.

I have read your document, I am not convinced by your arguments that we should attempt to address every single criticism and question, I am not convinced by your arguments that we can realistically do so, and I think the main practical effects of embracing your principles on this point would be (1) to favour obsessive cranks who have nothing else to do with their time than argue about their pet theories, (2) to encourage obsessive-crank-like behaviour, and (3) to make those who embrace them spend more time arguing on the internet. I can't speak for others, but I don't want to give advantages to obsessive cranks, I don't want to become more obsessive and cranky myself, and I think it much more likely that I spend too much time arguing on the internet rather than too little.

I see nothing to suggest that further investigation of "paths forward" is likely to be a productive use of my time.

So: no, I don't want to spend more time learning, discussing, or using "paths forward". I think it would be a suboptimal way to use that time.

Comment author: curi 09 November 2017 02:25:48AM 0 points

Eliezer has already indicated [1] he'd prefer to take administrative action to prevent discussion than speak to the issues. No Paths Forward there!

[1] http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/3wf5

Comment author: gjm 09 November 2017 11:50:30AM 3 points

That's ... not a very accurate way of describing what happened. Not because there's literally no way to understand it that makes it factually correct, but because it gives entirely the wrong impression.

Here's a more complete description of what happened.

curi came here in early April 2011 (well, he actually first appeared earlier, but before then he made a total of three comments ever) and posted five lengthy top-level posts in five days. They were increasingly badly received by the community, getting scores of -1, -1, -1, -22, -38. The last one was entitled "The conjunction fallacy does not exist" and what it attempted to refute was a completely wrong statement of what the c.f. is about, namely the claim (which no one believes) that "people attribute higher probability to X&Y than to Y" for all X and Y.

As this was happening, more and more of the comments on curi's posts were along the general lines of this one saying, in essence: This is not productive, you are just repeating the same wrong things without listening to criticism, so please stop.

It was suggested that there was some reason to think curi was using sockpuppets to undo others' downvotes and keep enough karma to carry on posting.

And then, in that context, curi's fifth post -- which attempted to refute the conjunction fallacy but which completely misunderstood what the conjunction fallacy is, and which was sitting on -38 points -- was removed.

Now, maybe that's because Eliezer was afraid of curi's ideas and wanted to close down discussion or something of the sort. But a more plausible explanation is that he thought further discussion was likely to be a waste of time for the same reason as several commenters.

I don't think removing the post was a good decision, and generally I think Eliezer's moderation has been too heavy-handed on multiple occasions. But I don't think the kind of explanation curi is offering for this is at all likely to be correct.

On the other hand, if curi is merely saying that Eliezer is unlikely to be interested if curi contacts him and asks for a debate on Bayes versus CR, then I think he's clearly right about that.

Comment author: curi 09 November 2017 12:11:09AM 0 points

if someone spoke for something smaller than LW, e.g. Bayesian Epistemology, that'd be fine. CR and Objectivism, for example, can be questioned and have people who will answer (unlike science itself).

and if someone wanted to take responsibility for gjm-LW or lumifer-LW or some other body of ideas which is theirs alone, that'd be fine too. but people aren't doing this as a group or individually!

Comment author: gjm 09 November 2017 12:37:20AM 0 points

Well, both Lumifer and I have (mostly in different venues) been answering a lot of questions and criticisms you've posed. But no, I don't think either of us feels "responsibility" in the specific (and, I think, entirely non-standard) sense you're using here, where to "take responsibility" for a set of ideas is to incur a limitless obligation to answer any and all questions and criticisms made of those ideas.

Comment author: RealJustinCEO 08 November 2017 09:55:56PM 1 point

In my understanding, there’s no one who speaks for LW, as its representative, and is responsible for addressing questions and criticisms. [...] No one is responsible for defining an LW school of thought and dealing with intellectual challenges.

Correct. There is no Pope of LW, we don't all agree about everything, and no one has any obligation to answer anyone else's objections.

Asking for someone who thinks some set of ideas is consistent and true, and who will address questions about those ideas thoroughly, is not asking for a Pope. It's more like asking for someone who's more than a casual fan.

That may be inconvenient for some purposes, but that's how it is.

One purpose that the lack of a serious LW advocate is "inconvenient" for is truth-seeking - a rather important case!

Comment author: gjm 09 November 2017 12:06:48AM 2 points

Asking for someone who [...] is not asking for a Pope.

No. But curi was asking for more than that. E.g., he wants someone who "speaks for LW". He wants them to do it "as [LW's] representative". He wants them to address arguments against LWish ideas "canonically". He wants someone "responsible for defining an LW school of thought". And so forth.

And, as I said above, this is just not how most communities or schools of thought work, nor should it be, nor I think could it be. Except for ones where in order to claim any sort of affiliation you are required to sign up to a particular body of doctrine. That mostly means religions, political parties, etc. And (again, as I said above) groups of that sort don't have an encouraging record of successfully distinguishing truth from error; I don't think we should be emulating them.
