
Comment author: Lumifer 30 November 2017 02:12:47AM 1 point

I still have no idea what "hostile to using references" is meant to mean.

It means you're unwilling to go to curi's website and read all he has written on the topic when he points you there.

Comment author: gjm 30 November 2017 05:32:47PM * 0 points

Maybe. Though actually I have gone to curi's website (or, rather, websites; he has several) and read his stuff, when it's been relevant to our discussions. But, y'know, I didn't accept Jesus into my life^W^W^W^W the Paths Forward approach, and therefore there's no point trying to engage with me on anything else.

[EDITED to add:] Am I being snarky? Why, yes, I am being snarky. Because I spent hours attempting to have a productive discussion with this guy, and it turned out that he wasn't prepared to do that unless he got to set every detail of the terms of discussion. And also because he took all the discussions he'd had on the LW slack and published them online without anyone's consent (in fact, he asked at least one person "is it OK to post this somewhere else?" and got a negative answer and still did it). For the avoidance of doubt, so far as I know there's nothing particularly incriminating or embarrassing in any of the stuff he posted, but of course the point is that he doesn't get to choose what someone else might be unwilling to have posted in a public place.

Comment author: jimrandomh 25 November 2017 03:18:04AM 9 points

No, you do not get to publicly demand an in-depth discussion of the philosophy of induction from a specific, small group of people. You can raise the topic in a place where you know they hang out and gesture in their direction. But what you're doing here is trying to create a social obligation to read ten thousand words of your writing. With your trademark in capital letters in every other sentence. And to write a few thousand words in response. From my outside perspective, engaging in this way looks like it would be a massive unproductive time sink.

Comment author: gjm 26 November 2017 07:49:14PM 3 points

It's worse than that. I tried to have a discussion of the philosophy of induction with him (over on the slack). He took exception to some details of how I was conducting myself, essentially because I wasn't following his "Paths Forward" methodology, and from that point on he wasn't interested in discussing the philosophy of induction.

So in effect he's publicly demanding an in-depth discussion of the philosophy of induction, conducted according to whatever idiosyncratic standards of debate he decides to set up, from a specific small group of people.

Comment author: jmh 16 November 2017 07:20:24PM 0 points

While not addressing the question of a role for AI, I often find myself thinking we should get away from the frequent trading of financial assets and make them a bit more like the trading of mutual funds. Do all the intra-day trades really give more information, or do they just add noise and the opportunity for insiders to make money off retail (and even some institutional) investors?

It seems like by designing the market to work a bit more like the one often used in Econ 101 theory -- that Walrasian Auctioneer -- we could have more stable markets that do better at pricing capital assets than today's. In other words, take all the order flow, see what price clears it, and then have all trades occur at that price.
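
A rough sketch of what that single-price batch clearing could look like (illustrative Python; the order book and clearing rule here are simplified assumptions, not any real exchange's mechanics):

```python
# Minimal sketch of a single-price call auction (Walrasian-auctioneer style).
# Orders and the clearing rule are simplified assumptions for illustration.

def clearing_price(buys, sells):
    """buys/sells: lists of (limit_price, quantity) collected over the batch window.
    Returns the single price that executes the most volume, and that volume."""
    prices = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best_price, best_volume = None, 0
    for p in prices:
        demand = sum(q for limit, q in buys if limit >= p)   # buyers willing to pay p
        supply = sum(q for limit, q in sells if limit <= p)  # sellers willing to take p
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

buys = [(10.2, 100), (10.1, 200), (10.0, 300)]
sells = [(9.9, 150), (10.0, 200), (10.1, 250)]
print(clearing_price(buys, sells))  # (10.0, 350): every trade executes at 10.0
```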

I suspect you'd still see some gaming of the system with fake orders (a bit like what the algos have been accused of in today's markets), but all systems get gamed.

Comment author: gjm 23 November 2017 12:46:11PM 0 points

This would have the consequence that if you see that XXXX is trading at $Y and phone up your broker to ask to sell your holding in XXXX, it could very well end up selling at $Y/2.

That's a thing that can happen already, but the delay between saying "sell" and actually selling is typically measured in seconds rather than hours, which makes big divergences like that less likely.

Of course you can avoid this by not saying "please sell 100 shares of XXXX" but "please sell 100 shares of XXXX unless the price drops below $Z, in which case don't". But this is more complexity than most retail investors want to handle :-).
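
For concreteness, the two instructions above are just a market order versus a limit order. A minimal sketch of the difference (the function and its names are hypothetical, not any broker's API):

```python
# Hypothetical sketch: plain "sell" (market order) vs.
# "sell unless the price drops below $Z" (limit order).

def execute_sell(execution_price, quantity, limit=None):
    """Market order if limit is None; otherwise fill only at or above the limit."""
    if limit is not None and execution_price < limit:
        return None  # "in which case don't": the order goes unfilled
    return execution_price * quantity

print(execute_sell(50.0, 100))              # 5000.0: fills even if the price has halved
print(execute_sell(25.0, 100, limit=45.0))  # None: protected from selling at $Y/2
```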

Comment author: gjm 23 November 2017 12:40:02PM 0 points

At one point in that discussion curi says the following, about me:

and then he was hostile to concepts like keeping track of what points he hadn't answered or talking about discussion methodology itself. he was also, like many people, hostile to using references.

I'd just like to say, for the record, that that is not an accurate characterization of my opinion or attitudes, and I do not believe it is an accurate characterization of my words either. What is true is that we'd been talking about various Popperish things, and then curi switched to only wanting to talk about my alleged deficiencies in rational conduct and about his "Paths Forward" methodology. I wasn't interested in discussing those (I've no general objection to talking about discussion methodology, but I didn't want to have that conversation with curi on that occasion) and he wasn't willing to discuss anything else.

I still have no idea what "hostile to using references" is meant to mean.

Comment author: IlyaShpitser 22 November 2017 03:24:07PM * 0 points

Yeah, credentials are a poor way of judging things.

They are not, though. It's standard "what LW calls 'Bayes' and what I call 'reasoning under uncertainty'" -- you condition on things associated with the outcome, since those things carry information. Outcome (O): having a clue; thing (C): credential. p(O | C) > p(O), so your credence in O should be computed after conditioning on C, on pain of irrationality. Specifically, the type of irrationality where you leave information on the table.


You might say "oh, I heard about how argument screens authority." This is actually not true though, even by "LW Bayesian" lights, because you can never be certain you got the argument right (or the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn't true.

It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
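
To make that concrete, here is a toy numerical version of the conditioning argument (all the numbers are invented for illustration):

```python
# Toy illustration of p(O | C) > p(O): the credential C carries information
# about the outcome O ("has a clue"). All numbers are invented.

p_O = 0.10              # prior probability of having a clue
p_C_given_O = 0.60      # probability of the credential, given a clue
p_C_given_not_O = 0.05  # probability of the credential, given no clue

p_C = p_C_given_O * p_O + p_C_given_not_O * (1 - p_O)  # total probability of C
p_O_given_C = p_C_given_O * p_O / p_C                  # Bayes' rule

print(p_O_given_C)  # ~0.571 > 0.10: conditioning on C raises credence in O
```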


The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of "experts" writ large, except for an explicitly enumerated subset (perhaps ones EY or other "recognized community thought leaders" liked).

This is a part of community DNA, starting with EY's stuff, and Luke's "philosophy is a diseased discipline."

That is crazy.

Comment author: gjm 22 November 2017 05:16:07PM 0 points

They are not, though.

Actually, I somewhat agree, but being an agreeable sort of chap I'm willing to concede things arguendo when there's no compelling reason to do otherwise :-), which is why I said "Yeah, credentials are a poor way of judging things" rather than hedging more.

More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don't consider him an authority on whether any sort of gods exist, because I think the process that gave him the credentials he has isn't sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is, and I would consider the Pope a very good authority on that topic even if he didn't have the ability to control that doctrine as well as report it.)

It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn't do this so well; that theology does it worse; that homeopathy does it worse still. (This isn't just about the moral or cognitive excellence of the disciplines in question; it's also that it's harder to tell whether someone's any good or not in some fields than in others.)

Comment author: curi 20 November 2017 09:15:59PM * 0 points

Here's a tricky example of judging authority (credentials). You say listen to SA about QM. Presumably also listen to David Deutsch (DD), who knows more about QM than SA does. But what about me? I have talked with DD about QM and other issues at great length and I have a very accurate understanding of what things I can say about QM (and other matters) that are what DD would say, and when I don't know something or disagree with DD. (I have done things like debate physics, with physicists, many times, while being advised by DD and him checking all my statements, so I find out when I have his views right or not.) So my claims about QM are about as good as DD's, when I make them – and are therefore even better than SA's, even though I'm not a physicist. Sorta, not exactly. Credentials are complicated and such a bad way to judge ideas.

What I find most people do is decide what they want to believe or listen to first, and then find an expert who says it second. So if someone doesn't want to listen, credentials won't help, they'll just find some credentials that go the other way. DD has had the same experience repeatedly – people aren't persuaded due to his credentials. That's one of the main reasons I'm here instead of DD – his credentials wouldn't actually help with getting people here to listen/understand. And, as I've been demonstrating and DD and I already knew, arguments aren't very effective here either (just like elsewhere).

And I, btw, didn't take things on authority from DD – I asked questions and brought up doubts and counter-arguments. His credentials didn't matter to me, but his arguments did. Which is why he liked talking with me!

Comment author: gjm 21 November 2017 10:18:25PM 0 points

Yeah, credentials are a poor way of judging things. But that first paragraph doesn't show remotely what you think it does.

Some of David Deutsch's credentials that establish him as a credible authority on quantum mechanics: he is a physics professor at a leading university and a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.

Your credentials as a credible authority on quantum mechanics: You assure us that you've talked a lot with David Deutsch and learned a lot from him about quantum mechanics.

This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don't.

It's not clear to me what argument you're actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that's flatly untrue.

Comment author: curi 21 November 2017 04:13:57AM 0 points

you have openly stated your unwillingness to

1) do PF

2) discuss PF or other methodology

that's an impasse, created by you. you won't use the methodology i think is needed for making progress, and won't discuss the disagreement. a particular example issue is your hostility to the use of references.

the end.

I am very willing to have a conversation.

given your rules, including the impasse above.

Comment author: gjm 21 November 2017 01:37:35PM 2 points

you have openly stated your unwillingness [...]

Yup. I'm not interested in jumping through the idiosyncratic set of hoops you choose to set up.

that's an impasse, created by you.

Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don't try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So ... nah, I'm not buying "created by you". I'm not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.

your hostility to the use of references

You just made that up. I am not hostile to the use of references.

(Maybe I objected to something you did that involved the use of references; I don't remember. But if I did, it wasn't because I am hostile to the use of references.)

Comment author: curi 20 November 2017 03:57:00AM 0 points

you haven't cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.

by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.

the claims made by some c.r. proponents

it's dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as "some c.r. proponents".

you refuse to try to quantify how error-prone any particular judgement is.

no. i have tried and found it's impossible, and found out why (arguments u don't wish to learn).

anyway i don't see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren't really looking to have a conversation to resolve the matter. why speak at all?

Comment author: gjm 21 November 2017 03:29:29AM * 0 points

you haven't cared to [...]

Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.

it's dishonest (or ignorant?) [...]

Why?

arguments you don't wish to learn

Don't wish to learn them? True enough. I don't see your relationship to me as being that of teacher to learner. I'd be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.

I don't see what your comment is supposed to accomplish.

It is supposed to point out some errors in things you wrote, and to answer some questions you raised.

you have 1.8 of your feet out the door.

Does that actually mean anything? If so, what?

you aren't really looking to have a conversation to resolve the matter.

I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose ("paths forward"), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about "paths forward" that you keep trying to replace it with.

why speak at all?

See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It's not my problem that you're unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.

[EDITED because I noticed I wrote "conservation" where I meant "conversation" :-)]

Comment author: curi 20 November 2017 01:41:40AM 0 points

Does it make sense to say that the probability of making the mistake in the judgment B is higher than the probability of making the mistake in the judgment A?

It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability b/c you're already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming – but that doesn't work in all cases. So you can't figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes but others are just unknown (and you can't combine "unknown" and "90%" to get an overall probability).

I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There's always framework questions too – e.g. you may say according to your framework, given your context, then you're unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn't know the probability your framework has a mistake, so you can't get an overall probability this way.

Difficult to do, and even more difficult to justify in a debate.

if you're already aware that your system doesn't really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they're wrong in some way wasn't enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn't care to refute.)

This may even be a hard limit on human certainty.

the hard limit is we don't have certainty, we're fallible. that's it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.

Suppose the theory predicts that an energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying "p<0.05" is also just an arbitrary number; why not "p<0.001"?)

you have to make a decision about what standards of evidence you will use for what purpose, and why that's the right thing to do, and expose that meta decision to criticism.

the epistemology issues we're talking about are prior to the physics issues, and don't involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren't probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)

see http://curi.us/2067-empiricism-and-instrumentalism and https://yesornophilosophy.com

You can have a "binary" solution only as long as you remain in the realm of words.

no, a problem can and should specify criteria of what the bar is for a solution to it. lots of the problems ppl have are due to badly formulated (ambiguous) problems.

which means you wouldn't feel a 100% certainty after the first reading

i do not value certainty as a feeling. i'm after objective knowledge, not feelings.

Comment author: gjm 20 November 2017 03:18:13AM 2 points

If you're already aware that your system doesn't work, due to this regress problem,

That isn't what Viliam said, and I suggest that here you're playing rhetorical games rather than arguing in good faith. It's as if someone took your fallibilism and your rejection of probability, and said "Since you admit that you could well be wrong and you have no idea how likely it is that you're wrong, why should we take any notice of what you say?".

why does no one here study the philosophy which has a solution to this problem?

You mean "the philosophy which claims to have a solution to this problem". (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)

Anyway, I think the answer depends on what you mean by "study". If you mean "investigate at all" then the answer is that several people here have considered some version of Popperian "critical rationalism", so your question has a false premise. If you mean "study in depth" then the answer is that by and large those who've considered "critical rationalism" have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.

My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else's, is that I've read a couple of Deutsch's books and some of his other writings and given Deutsch's version of "critical rationalism" hours, but not weeks, of thought, and that since you turned up here I've given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. "solves" the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren't explicitly about random events, even less credible; that the "return on investment" of further in-depth investigation of Popper's or Deutsch's ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they're all bad ideas but because I think I already grasp them well enough for my purposes.

the epistemology issues [...] are prior to the physics issues, and don't involve that kind of measurement error issue.

A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to "resolve epistemology" in a way that ignores such errors. (Perhaps that isn't what you meant by "we can talk about measurement error after resolving epistemology", in which case some clarification would be a good idea.)

What we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.

You say that as if you expect it to be a new idea around here, but it isn't. See e.g. this old LW article. For the avoidance of doubt, I'm not claiming that what that says about knowledge and certainty is the same as you would say -- it isn't -- nor that what it says is original to its author -- it isn't. Just that distinguishing knowledge from certainty is something we're already comfortable with.

I do not value certainty as a feeling.

You would equally not be entitled to any other sort of 100% certainty, including one you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
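
To put a toy number on "very likely" (the per-judgement error rate and the number of judgements here are invented assumptions):

```python
# Small, independent per-judgement error rates compound: the chance that at
# least one of n judgements is wrong grows quickly with n.

eps, n = 0.01, 100  # assumed 1% error rate per judgement, 100 judgements
p_at_least_one_error = 1 - (1 - eps) ** n
print(p_at_least_one_error)  # ~0.634: more likely than not that something is wrong
```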

Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is "obviously" a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.

(And when you complain above of an infinite regress, it's precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren't going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn't seem to me that it does.)

Comment author: curi 09 November 2017 08:03:34PM 0 points

what do you do about ideas which make identical predictions?

Comment author: gjm 09 November 2017 08:36:14PM 0 points

They get identical probabilities -- if their prior probabilities were equal.

If (as is the general practice around these parts) you give a markedly bigger prior probability to simpler hypotheses, then you will strongly prefer the simpler idea. (Here "simpler" means something like "when turned into a completely explicit computer program, has shorter source code". Of course your choice of language matters a bit, but unless you make wilfully perverse choices this will seldom be what decides which idea is simpler.)
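
A minimal sketch of how such a simplicity prior could be computed (the hypotheses and "source code" lengths are made-up stand-ins):

```python
# Simplicity prior sketch: weight each hypothesis by 2 ** -(description length),
# then normalize. Hypotheses and bit-lengths are made-up stand-ins.

hypotheses = {"simple-theory": 40, "baroque-theory": 55}  # source length in bits

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

print(priors)  # the 15-bit-shorter theory gets ~2**15 (~33,000x) more prior mass
```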

In so far as the world turns out to be made of simply-behaving things with complex emergent behaviours, a preference for simplicity will favour ideas expressed in terms of those simply-behaving things (or perhaps other things essentially equivalent to them) and therefore more-explanatory ideas. (It is at least partly the fact that the world seems so far to be made of simply-behaving things with complex emergent behaviours that makes explanations so valuable.)
