BrianScurfield comments on Taking Ideas Seriously - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Ideas that should be taken more seriously by Less Wrong:
Gee, I wonder what philosopher of science you have been reading. :)
I would suggest that you read through the sequences with an open mind - particularly on your point #4. If you find it impossible to open your mind on that point, then open it to the possibility that the word "probability" can have two different meanings and that your point #4 only applies to one of them. If you find it impossible to open your mind to the possibility that a word might have an alternative meaning which you have not yet learned, then please go elsewhere.
Regarding Popper, it is not so much that he is wrong, as that he is obsolete. We think we have learned that set of lessons and moved on to the next set of problems.
If you have already begun reading the sequences, and were motivated to give us this dose of Popper because Eliezer's naive realism got on your nerves, well ... All I can say is that it got on my nerves too, but if you keep reading you will find that EY is not nearly as epistemologically naive as it might seem in the early sequence postings.
No, Popper is not obsolete, and clearly the lessons of Popper have not been learnt by many: consider the people who have not yet understood that induction is a myth. Consider, also, the people who constantly misrepresent what Popper said - saying his philosophy is falsificationism, or that he was a positivist, or that he snuck induction in via the back door (you can find examples of these kinds of mistakes discussed here). Popper's ideas are in fact difficult for most people - they blow away the whole justificationist meta-context, a meta-context that permeates most people's thinking. Understanding Popper requires that you take him seriously. David Deutsch did that and expanded on Popper's ideas in a number of ways (you may be interested in a new book he has coming out called "The Beginning of Infinity"). He is another philosopher I follow closely. As is Elliot Temple (www.curi.us).
Thanks for the links and references. I will look into them. I urge you once more to work your way through the sequences. It appears you have something to teach us, but I doubt that you will be very successful until you have learned the local jargon, and become sufficiently familiar with our favorite examples to use them against us.
However, I have to say that I was a bit disconcerted by this:
Now if you told me that the standard definition of induction misrepresents the evidence-collection process, or that you know how to dissolve the problem of induction, well then I would be all ears. But when you say that "induction is a myth", I hear you saying that everyone who has thought seriously on the topic, from Hume to the present, got it wrong - that all those smart people were as deluded as the medieval philosophers who worried about angels dancing on the heads of pins.
See the thing is, I would have to keep having to upvote such arrogance and stupidity, just so the comment to which I am responding doesn't disappear. And I don't want to do that.
You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction". That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas? Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley? They all agree with Popper that:
Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).
Of course. That is why I mentioned him.
"Exploded". My! What violent imagery. I usually prefer to see problems "dissolved". Less metaphorical debris. And yes, I've read quite a bit of Popper, and admire much of it.
Nope, I haven't.
You know, when giving page citations in printed texts, you should specify the edition. My 1965 Harper Torchbook paperback edition does not show Popper saying that on p 70. But, no matter.
One of the few things I dislike about Popper is that he doesn't seem to understand statistical inference. I mean, he is totally clueless on the subject. It is not just that he isn't a Bayesian - it seems he doesn't "get" Pearson and Fisher either. Well, no philosopher gets everything right. But if he really thinks that "inference based on many observations" cannot happen - not just that it is frequently done wrong, but rather that it is impossible - then all I can say is that this is not one of Sir Karl's better moments.
And if what he means is simply that we cannot infer absolute general truths from repeated observations, then I have to call him a liar for suggesting that anyone else ever suggested that we could make such inferences.
But, since you have been recommending philosophers to me, let me recommend some to you. I. J. Good is fun. Richard Jeffrey is not bad either. E.T. Jaynes explains quite clearly how one makes inferences based on observations - one observation or many observations. You really ought to look at Jaynes before coming to this forum to lecture on epistemology.
Perhaps you should know I have published papers where I have used Bayes extensively. I am well familiar with the topic (edit: though this doesn't make me any kind of infallible authority). I was once enthusiastic about Bayesian epistemology myself. I now see it as sterile. Popperian epistemology - especially as extended by David Deutsch - is where I see fertile ground.
Cool. But more to the point, have you published, or simply written, any papers in which you explain why you now see it as sterile? Or would you care to recommend something by Deutsch which reveals the problems with Bayesianism. Something that actually takes notice of our ideology and tries to refute it will be received here much more favorably than mere diffuse enthusiasm for Popper.
The quote is from 3rd ed. 1968. You say you have read Popper, then you should not be surprised by the quote. Your response above is just the argument from incredulity. Do you have a better criticism?
I'm not surprised by the quote. I just couldn't find it. It apparently wasn't in 2nd edition. But my 2nd edition index had many entries for "induction, myth of _" so I don't doubt you at all that Popper actually said it.
I am incredulous because I know how to do inference based on a single observation, as well as inference based on many. And so does just about everyone who posts regularly at this site. It is called Bayesian inference, and is not really all that difficult. Even you could do it, if you were to simply set aside your prejudice that
I have already provided references. You can find thousands more by Googling.
OK, tell me: how do you know, in advance of having any theory, what to observe?
BTW, please don't assume things about me, like asserting that I hold prejudices. The philosophical position I come from is a full-blown one - it is no mere prejudice. Also, I'm quite willing to change my ideas if they are shown to be wrong.
Ok, I won't assume that you believe, with Popper whom you quote, that inference based on many observations is impossible. I will instead assume that Popper is using the word "inference" very differently than it is used around here. And since you claim to be an ex-Bayesian, I will assume you know how the word is used here. Which makes your behavior up until now pretty inexplicable, but I will make no assumptions about the reasons for that.
Likewise, please do not assume that I believe that observation is neither theory-laden nor theory-directed. As it happens, I do not know in advance of a theory what to observe.
Of course, the natural thing for me to do now would be to challenge you to explain where theories come from in advance of observation. But why don't we both just grow up?
If you have a cite for a careful piece of reasoning which will cause us to drop our Bayesian complacency and re-embrace Popper, please provide it and let us read the text in peace.
A better phrasing for that might have been "certain knowledge is a myth." What cannot be logically justified is reasoning from particular observations to certainty in universal truths. You're commenting as if you are unaware of the positions and arguments linked from my previous reply, and perhaps Where Recursive Justification Hits Bottom. You have intelligent things to say, but you're not going to be taken seriously here if you're not aware of the pre-existing intelligent responses to them - responses popular enough to amount to public knowledge.
No, that is not equivalent. Popper wrote that "inference based on many observations is a myth". He is saying that we never reason from observations, never mind reasoning to certainty. In order to observe, you need theories. Without those, you cannot know what things you should observe or even make sense of any observation. Observation enables us to test theories, it never enables us to construct theories. Furthermore, Popper throws out the whole idea of justifying theories. We don't need justification at all to progress. Judging from Where Recursive Justification Hits Bottom, this is something Eliezer has not fully taken on board (though I may be wrong). He sees the problem of the tu-quoque, but he still says [e]verything, without exception, needs justification. No, nothing can be justified. Knowledge advances not positively by justifying things but negatively by refuting things. Eliezer does see the importance of criticism, but my impression is that he doesn't know Popper well enough.
For Yudkowsky on Popper, start here:
"Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning."
...and keep reading - at least as far as:
"On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued."
Yudkowsky gets a lot wrong even in a few sentences:
First, Popper's philosophy cannot be accurately described as falsificationism - that is just a component of it, and not the most important component. Popperian philosophy consists of many inter-related ideas and arguments. Yudkowsky makes an error that Popperian newbies make. One suspects from this that Yudkowsky is making himself out to be more familiar with Popper than he actually is. His claim to be dethroning Popper would then be dishonest, as he does not have detailed knowledge of the rival position. Also, he is wrong that Popper is popular: he isn't. Furthermore, Popper was familiar with Bayesian epistemology and actually discusses it in his books. So calling Popper's philosophy old and making out that Bayesian epistemology is new is wrong also.
Popper never said theories can be definitely falsified. He was a thoroughgoing fallibilist and viewed falsifications as fallible conjectures. Also he said that theories can never be confirmed at all, not that they can be partially or probabilistically confirmed, which the above sentence suggests he said. Saying falsification is a special case of the Bayesian rules also doesn't make sense: falsification is anti-induction whereas Bayesian epistemology is pro-induction.
Further comments on Yudkowsky's explanation of Bayes:
Science revolves around explanation and criticism. Most scientific ideas never get to the point of testing (which is a form of criticism), they are rejected via criticism alone. And they are rejected because they are bad explanations. Why is the emphasis in the quote solely on evidence? If science is a special case of Bayes, shouldn't Bayes have something to say about explanation and criticism? Do you assign probabilities to criticism? That seems silly. Explanations and criticism enable us to understand things and to see why they might be true or false. Trying to reduce things to probabilities is to completely ignore the substance of explanations and criticisms. Instead of trying to get a probability that something is true, you should look for criticisms. You accept as tentatively true anything that is currently unproblematic and reject as tentatively false anything that is currently problematic. It's a boolean decision: problematic or unproblematic.
Both bayesian induction (as we currently know it) and Popper fail my test for a complete epistemology.
The test is simple. Can I use the description of the formalism to program a real computer to do science? And it should, in theory, be able to bootstrap itself from no knowledge of science to our level.
I think that the contribution that Bayesian methodology makes toward good criticism of a scientific hypothesis is that to "do the math", you need to be able to compute P(E|H). If H is a bad explanation, you will notice this when you try to determine (before you see E) how you would go about computing P(E|H). Alternately, you discover it when you try to imagine some E such that P(E|H) is different from P(E|not H).
No, you don't assign probabilities to criticisms, as such. But I do think that every atomic criticism of a hypothesis H contains at its heart a conditional proposition of the form (E|H) or else a likelihood odds ratio P(E|H)/P(E|not H) together with a challenge, "So how would you go about calculating that?"
Incidentally, you also ought to look at some of the earlier postings where EY was, in effect, using naive Bayes classifiers to classify (i.e. create ontologies), rather than using Bayes's theorem to evaluate hypotheses that predict. Also take a look at Pearl's book to get a modern Bayesian view of what explanation is all about.
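The odds form of Bayes' theorem mentioned above - a likelihood ratio P(E|H)/P(E|not H) multiplying the prior odds - can be sketched in a few lines. This is an illustrative sketch with made-up numbers; `update_odds` is a hypothetical helper, not anything from the thread:

```python
# Odds form of Bayes' theorem:
#   posterior odds = prior odds * P(E|H) / P(E|not H)
# The likelihood ratio is the "atomic" quantity the comment above refers to.

def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Return posterior odds for H after observing evidence E."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# A hypothesis at even odds (1:1), with evidence nine times likelier
# under H than under not-H:
odds = update_odds(1.0, 0.9, 0.1)   # posterior odds ~ 9:1
prob = odds / (1 + odds)            # convert odds back to a probability
```

Converting between odds and probability at the end is routine; the point is that a single number per criticism - the likelihood ratio - carries the whole evidential impact.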
If you were asked to bet on whether it was true or not, then you should assign a probability.
Scientists often do something like that when deciding how to allocate their research funds.
I like this point a lot. But it seems very convenient and sensible to say that some things are more problematic than others. And at least for certain kinds of claims it's possible to quantify how problematic they are with numbers. This leads one (me at least) to want a formalism - for handling beliefs - that involves numbers, and Bayesianism is a good one.
What's the conjectures-and-refutations way of handling claims like "it's going to snow in February"? Do you think it's meaningless or useless to attach a probability to that claim?
More from Yudkowsky on the philosophy of science:
http://lesswrong.com/lw/ig/i_defy_the_data/
The chance of a criticism being correct can unproblematically be assigned a probability.
Popper obviously hadn't read Wikipedia:
http://en.wikipedia.org/wiki/Inductive_reasoning
In what sense do you mean this exactly, and what evidence for it do you have? I've spoken to people like Elliot, but all they said was things like 'humans can function as a Turing machine by laboriously manipulating symbols'. Which is nice, but not really relevant to anything in real-time.
On a more general note, you should probably try to be a little clearer: 'conjectures and refutations' doesn't really pick out any particular strategy from strategy-space, and neither does the phrase 'explanation' pick out anything in particular. Additionally, 'induction' is sufficiently different from what people normally think of as myths that it could do with some elaboration.
Similarly, some of these issues we do take seriously; we know we're fallible, and it sounds like you don't know what we mean by probability.
Finally, welcome to Less Wrong!
Edit: People, don't downvote the parent; there's no reason to scare the newbies.
Where 'real-time' can be taken literally to refer to time that is expected to exist in physics models of the universe.
Another way of saying it is that human beings can solve any problem that can be solved. Does that help?
Careful here - as I mentioned above, evidence never supports a theory, it just provides a ready stock of criticisms of rival theories. Let me give you an argument: If you hold that human beings are not universal knowledge creators, then you are saying that human knowledge creation processes are limited in some way, that there is some knowledge we cannot create. You are saying that humans can create a whole bunch of knowledge but whole realms of other knowledge are off limits to us. How does that work? Knowledge enables us to expand our abilities and that in turn enables us to create new knowledge and so on. Whatever this knowledge we can't create is, it would have to be walled off from all this other expanding knowledge in a rather special way. How do you build a knowledge creation machine that only has the capability to create some knowledge? That would seem much much more difficult than creating a fully universal machine.
I don't know what point Elliot was answering here, but I guess he is saying that humans are universal Turing Machines and illustrating that. He is saying that humans are universal in the sense that they can compute anything that can be computed. That is a different notion of universality to the one under discussion here (though there is a connection between the two types of universality). Elliot agrees that humans are universal knowledge creators and has written a lot about it (see, for example, his posts on The Fabric of Reality list).
'Conjectures and refutations' is an evolutionary process. The general methodology (or strategy, if you prefer) is: When faced with a problem try to come up with conjectural explanations to solve the problem and then criticise them until you find one (and only one) that cannot be knocked down by any known criticism. Take that as your tentative solution. I guess what you are looking for is an explanation of how human conjecture engines work? That is an unsolved problem. We do know some things, eg: no induction is involved.
Explanations are valuable: they help you understand something. Are you looking for an explanation of how we generate "explanations"? Again, unsolved problem.
It's not really different. It's something that people believe is true that in fact isn't. Hume was the first to realize that there was a "problem of induction" and philosophers have for years and years been trying to justify induction. It took Karl Popper to realize that induction isn't actually how we create knowledge at all: induction is a myth.
Yes, you are called "Less Wrong" after all! I was off-beam with that.
Actually, I am quite familiar with the Bayesian conception of probability. I just don't think probability has a role in the realm of epistemology. Evidence does not make a theory more probable, not even from a subjective point of view. What evidence does, as I have said, is provide a stock of criticisms against rival theories. Also, evidence only goes so far: what really matters is how theories stand up to criticism as explanations. Evidence plays a role in that. I am quite happy to talk about the probability of events in the world, but events are different from explanatory theories. Apples and oranges.
Of course evidence makes theories more probable:
Imagine you have two large opaque bags full of beans, one 50% black beans and 50% white beans and the other full of white beans. The bags are well shaken, you are given one bag at random. You take out 20 beans - and they are all white.
That is clearly evidence that confirms the hypothesis that you have the bag full of white beans. If you had the "mixed" bag, that would only happen one time in a million.
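For concreteness, the update in the bean-bag example can be computed explicitly. This is a minimal sketch, not code from the thread; `posterior_all_white` is a hypothetical name, and the draws are approximated as independent (with replacement, or a very large bag):

```python
# Bayesian update for the two-bag example: prior 50/50 over
# {all-white bag, mixed 50/50 bag}; evidence = n white beans drawn in a row.

def posterior_all_white(n_white_draws, prior_all_white=0.5):
    """P(all-white bag | n white beans drawn), draws treated as independent."""
    p_e_given_white = 1.0                    # all-white bag always yields white
    p_e_given_mixed = 0.5 ** n_white_draws   # mixed bag: (1/2)^n, ~1e-6 at n=20
    prior_mixed = 1.0 - prior_all_white
    evidence = (p_e_given_white * prior_all_white
                + p_e_given_mixed * prior_mixed)
    return p_e_given_white * prior_all_white / evidence

print(posterior_all_white(20))  # ~0.999999: overwhelming odds for all-white
```

The "one time in a million" figure in the comment is just the (1/2)^20 term; the posterior follows mechanically from it and the 50/50 prior.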
Notice that the counterfactual event is possible (that you have the mixed bag). And even if you hold the bag full of white beans, the counterfactual event that you hold the mixed beans does occur elsewhere in the multiverse. This is what distinguishes events from theories. A false theory never obtains anywhere: it is simply false. So a theory being true or false is not at all like the situation with counterfactual events. You cannot assign anything objective to a false theory.
The actual theory you hold in your example is approximately the following: I have made a random selection from a bag and I know that I have been given one of two bags: one 50% black beans and 50% white beans and the other full of white beans and: I have been honestly informed about the setup, am not being tricked, no mistakes have been made etc. This theory predicts that if I take 20 white beans out of the bag, then the chance of that would be one in a million if I had the mixed bag. Do you understand? The real situation is that you have a theory that is making probabilistic predictions about events and, as I have said several times, I have no problem with probabilistic predictions of theories about events.
As briefly as possible:
Firstly, this seems like a step forwards to me. You seem to agree that induction and confirmation are fine 90% of the time. You seem to agree that these ideas work in practice - and are useful - including in some realms of knowledge - such as knowledge relating to which bag is in front of you in the above example. This puts your anti-induction and anti-confirmation statements into a rather different light, IMO.
Confirmation theory has nothing to do with multiverses. It applies equally well for agents in single deterministic universes - such as can be modelled by cellular automata. So, reasoning that depends on the details of multiverse theories is broken from the outset. Imagine evidence for wavefunction collapse was found. Not terribly likely - but it could happen - and you don't want your whole theory of epistemology to break if it does!
Treating uncertainty about theories and uncertainty about events differently is a philosophical mistake. There is absolutely no reason to do it - and it gets people into all kinds of muddles.
We have a beautiful theory of subjective uncertainty that applies equally well to uncertainty about any belief - whether it refers to events, or scientific theories. You can't really tease these categories apart anyway - since many events are contingent upon the truth of scientific theories - e.g. Higgs boson observations. Events are how physical law is known to us.
Instead of using one theory for hypotheses about events and another for hypotheses about universal laws you should - according to Occam's razor - be treating them in the same way - and be using the same underlying general theory that covers all uncertain knowledge - namely the laws of subjective probability.
"Bayesian Confirmation Theory"
http://plato.stanford.edu/entries/epistemology-bayesian/#BayTheBayConThe
Tim - In the example we have been discussing, no confirmation of the actual theory (the one I gave in approximate outline) happens. The actual theory makes probabilistic predictions about events (it also makes non-probabilistic predictions) and tells you how to bet. Getting 20 white beans doesn't make the actual theory any more probable - the probability was a prediction of the theory. Note also that a theory that you are being tricked might recommend that you choose the mixed bag when you get 20 white beans. Lots of theories are consistent with the evidence. What you need to look for is things to refute the possible theories. If you are concerned with confirmation, then the con man wins.
So I am not agreeing that induction and confirmation are fine any percentage of the time (how did you get that 90% figure?). When you consider the actual possible theories of the example, all that is happening is that you have explanatory theories that make predictions, some probabilistic, and that tell you how to bet. The theories are not being induced from evidence and no confirmation takes place.
You haven't explained how we assign objective probabilities to theories that are false in all worlds.
What you're talking about here is a strategy for avoiding bias which Bayesians also use. It is not a fundamental feature of any particular epistemology.
We don't assign objective probabilities, full stop.
I think you are too lost for me :-(
You don't seem to address the idea that multiverse theories are an irrelevance - and that in a single deterministic automaton, things work just the same way.
Indeed, scientists don't even know which (If any) laws of physics are true everywhere, and which depend on the world you are in.
You don't seem to address the idea that we have a nice general theory that covers all kinds of uncertainty, and that no extra theory to deal with uncertainty about scientific hypotheses is needed.
If you don't class hypotheses about events as being "theories", then I think you need to look at:
http://en.wikipedia.org/wiki/Scientific_theory
Also, your challenge doesn't seem to make much sense. The things people assign probabilities to are things they are uncertain about. If you tell me a theory is wrong, it gets assigned a low probability. The interesting cases are ones where we don't yet know the answer - like the clay theory of the origin of life, the orbital inclination theory of glacial cycles - and so on.
Distinguishing between scientific theories and events in the way that you do apparently makes little sense. Events depend on scientific theories. Scientific theories predict events. Every test of a scientific theory is an event. Observing the perihelion precession of Mercury was an event. The observation of the deflection of light by the Sun during an eclipse was an event. If you have probabilities about events which are tests of scientific theories, then you automatically wind up with probabilities about the theories that depend on their outcome.
Basically agents have probabilities about all their beliefs. That is Bayes 101. If an agent claims not to have a probability about some belief, you can usually set up a bet which reveals what they actually think about the subject. Matters of fundamental physics are not different from "what type of beans are in a bag" - in that respect.
Yes, scientific theories predict events. So there is a distinction between events and theories right? If the event is observed to occur, all that happens is that rival theories that do not predict the event are refuted. The theory that predicted the event is not made truer (it already is either true or false). And there are always an infinite number of other theories that predict the same event. So observing the event doesn't allow you to distinguish among those theories.
In the bean bag example you seem to think that the rival theories are "the bag I am holding is mixed" and "the bag I am holding is all white". But what you actually have is a single theory that makes predictions about these two possible events. That theory says you have a one-in-a-million chance of holding the mixed bag.
No, General Relativity being true or false is not like holding a bag of white beans or holding a bag of mixed beans. The latter are events that can and do obtain: They happen. But GR is not true in some universes and false in others. It is either true or false. Everywhere. Furthermore, we accept GR not because it is judged most likely but because it is the best explanation we have.
Popperians claim that we don't need any theory of uncertainty to explain how knowledge grows: uncertainty is irrelevant. That is an interesting claim don't you think? And if you care about the future of humanity, it is a claim that you should take seriously and try to understand.
If you are still confused about my position, why don't you try posting some questions on one of the following lists:
http://groups.yahoo.com/group/Fabric-of-Reality/
http://groups.yahoo.com/group/criticalrationalism/
It might be useful for other Popperians to explain the position - perhaps I am being unclear in some way.
Edit: Just because people might be willing to place bets is no argument that the epistemological point I am making is wrong. What makes those people infallible authorities on epistemology? Also, if I accept a bet from someone that a universal theory is true, would I ever have to pay out?
What about the problem of building pyramids on Alpha Centauri by 2012? We can't, but aliens living there could.
More pressingly though, I don't see why this is important. Have we been basing our arguments on an assumption that there are problems we can't solve? Is there any evidence we can solve all problems without access to arbitrarily large amounts of computational power? Something like AIXI can solve pretty much anything, but not relevantly.
How about a neural network that can't learn XOR?
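The XOR point can be made concrete: a single-layer perceptron computes a linear threshold function, and XOR is not linearly separable, so no setting of the weights works. A self-contained sketch (the grid search is only illustrative; the actual impossibility argument is in the comments):

```python
# A single-layer perceptron outputs 1 iff w1*x1 + w2*x2 + b > 0 - a linear
# separator. XOR cannot be represented this way:
#   fitting XOR requires  w1+b > 0,  w2+b > 0,  b <= 0,  w1+w2+b <= 0;
#   adding the first two gives w1+w2+2b > 0, i.e. w1+w2+b > -b >= 0,
#   contradicting w1+w2+b <= 0. A coarse grid search illustrates the failure.

def fits_xor(w1, w2, b):
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in data)

grid = [i / 4 for i in range(-8, 9)]  # weights and bias in [-2, 2]
assert not any(fits_xor(w1, w2, b)
               for w1 in grid for w2 in grid for b in grid)
```

This is exactly the sense in which a learner can be structurally walled off from some knowledge: the hypothesis space simply does not contain the target. Adding a hidden layer removes the wall.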
The manner in which explanations are knocked down seems under-specified, if you're not doing Bayesian updating.
Nope, I just don't know what in particular you mean by 'explanation'. I know what the word means in general, but not your specific conception.
Well, that's different from there being no such thing as a probability that a theory is true: your initial assertion implied that the concept wasn't well defined, whereas now you just mean it's irrelevant. Either way, you should probably produce some actual arguments against Jaynes's conception of probability.
Meta: You want to reply directly to a post, not its descendants, or the other person won't get a notification. I only saw your post via the Recent Posts list.
Also, it's no good telling people that they can't use evidence to support their position because it contradicts your theory when the other people haven't been convinced of your theory.
Criticism enables us to see flaws in explanations. What is under-specified about finding a flaw?
In your way, you need to come up with criticisms and also with probabilities associated with those criticisms. Criticisms of real-world theories can be involved and complex. Isn't it enough to expose a flaw in an explanatory theory? Must one also go to the trouble of calculating probabilities - a task that is surely fraught with difficulty for any realistic idea of criticism? You're adding a huge amount of auxiliary theory, and your evaluation is then also dependent on the truth of all this auxiliary theory.
My conception is the same as the general one.
You don't seem to be actually saying very much then; is LW really short on explanations, in the conventional sense? Explanation seems well evidenced by the last couple of top-level posts. Similarly, do we really fail to criticise one another? A large number of the comments seem to be criticisms. If you're essentially criticising us for not having learned rationality 101 - the sort of rationality you learn as a child of 12, arguing against God - then obviously it would be a problem if we didn't bear that stuff in mind. But without providing evidence that we succumb to these faults, it's hard to see what the problem is.
Your other points, however, are substantive. If humans could solve any problem, or it was impossible to design an agent which could learn some but not all things, or confirmation didn't increase subjective plausibility, these would be important claims.
Elliot has informed me that he doesn't think he said: "humans can function as a Turing Machine by laboriously manipulating symbols", except possibly in reply to a very specific question like "Give a short proof that humans have computational universality".
Why do you say "people like Elliot"? Elliot has his own views on things and shouldn't be conflated with people who you think are like him. It seems to me you don't understand his ideas, so you wouldn't know what the people who are like him are like.
For interesting definitions of 'can', perhaps. I know some humans who can't create much of anything.
I'm not sure that counts as a 'way of creating knowledge'. 'Conjectures' sounds to me like a black box which would itself contain the relevant bit.
I'd want to know what you mean by 'myth'. It's worked so far, though that only counts as evidence for those of us blinded by the veil of Maya.
Probability is in the mind. Theories are either true or false, and there is such a thing as the probability that a theory is true.
I'm not sure what you mean by that.
This shows the remarks about 'probability' above to be merely a definitional dispute. Probability describes uncertainty, and you admit that we have uncertain knowledge.
True that!
Welcome to Less Wrong
ETA: Reminder that we have a rough community norm against downvoting first posts when they seem to be in good faith.
All human beings create knowledge - masses of it. Certain ideas can and do impair a person's creativity, but it is always possible to learn and to change one's ideas.
It's not just conjectures, it's "conjectures and refutations". Knowledge is created by advancing conjectural explanations to solve a problem and then criticizing those conjectures in an attempt to refute them. The goal is to find a conjecture that can withstand all criticisms we can think of and to refute all rival conjectures.
No, it never worked. Not a bit. That's what I mean by myth.
Theories are objective. Whether you think a theory is true or false has no bearing on whether it is in fact true or false. Moreover, how do you assign a probability to a complex real-world theory like, say, multiversal quantum mechanics? What counts is whether the theory has stood up to criticism as an explanation to a problem or set of problems. If it has, who cares about how probable you think it is? It's not the probability that you should care about, it's the explanation.
Above all else, we should try to find explanations for things; explanations are the most important kind of knowledge.
Knowledge is always uncertain, yes, but it is impossible to objectively quantify the uncertainty. Put another way, you cannot know what you do not yet know. Theories can be wrong in all sorts of ways, but you have no way of knowing in advance how or if a theory will go wrong. It's not a definitional dispute.
OK, we agree on that!
Probability is subjectively objective. All conjectures/models are wrong, but some are useful to the extent that they successfully constrain expected experience.