No, you do not get to publicly demand an in-depth discussion of the philosophy of induction from a specific, small group of people. You can raise the topic in a place where you know they hang out and gesture in their direction. But what you're doing here is trying to create a social obligation to read ten thousand words of your writing. With your trademark in capital letters in every other sentence. And to write a few thousand words in response. From my outside perspective, engaging in this way looks like it would be a massive unproductive time sink.
I have no reason to believe that Curi is among the people who are really good philosophers.
Popper may have said useful things for his time, but he's dead. I can't read Popper on what he thinks about the No Free Lunch theorem or other ideas that came up after he died.
Barry Smith is an example of a person I like and whose work is worth spending more time reading. His work on applied ontology actually matters for real-world decision making and knowledge modeling.
Reading more from Judea Pearl (who, by the way, supervised Ilya Shpitser's PhD) is also on my long-term philosophical reading list.
Your sockpuppet: "There is a shortage of good philosophers."
Me: "Here is a good philosophy book."
You: "That's not philosophy."
Also you: "How is Ayn Rand so right about everything?"
Also you: "I don't like mainstream stuff."
Also you: "Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?"
Also you: "What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?" x1000
Part of rationality is properly dealing with people-as-they-are. Your approach to spreading your good word among people-as-they-are has led to them laughing at you.
It is possible that they are laughing at you because they are some combination of stupid and insane. But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.
This is what Yudkowsky sort of tried to do.
You read to me like a smart young adult with the same problem Yudkowsky has (although Yudkowsky is not so young anymore): someone who has been the smartest person in the room for too long in his intellectual development, and who lacks the sense of scale and context to see where he stands in the larger intellectual community.
I hunted around your website until I found an actual summary of Popper's thinking in straightforward language.
Until I found that, I had not seen you provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and free of citations. The sequence should be about what "induction" is, why you think other people believed something about it (even if perhaps not by that old-fashioned name), and why you think those beliefs are connected to reliably predictable failures...
I think there are two big facts here.
ONE: You're posting over and over again with lots of links to your websites, which are places you offer consulting services, and so it kinda seems like you're maybe just a weirdly inefficient spammer for bespoke nerd consulting.
This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don't even know... consulting gigs or something?
This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don't seem to be doing that, so basically everything you write here I take with a giant grain of salt.
My hope is that you are just missing some basic insight, and that once you learn why you come across as half-malicious you will stop defecting in the communication game and become valuable :-)
TWO: From what you write here at an object level, you don't even seem to have a clear and succinct understanding of any of the things that have been called a "problem of induction" over the years, which is your major beef, from what I can see.
You've mentioned...
Fundamentally, the thing I offer you is respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want) because out of all the places on the Internet you are here.
If I'm wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!
One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.
Personally, I'd be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending "public letters" to MIRI (an organization I've donated to because I think they have limited resources and are doing good work).
I don't dislike books in general. I don't dislike commercialism in general. I dislike your drama, and your shallow, citation-filled posts showing up in this particular venue.
Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There's lots of ways to fix this... you writing better stuff... you writi...
At one point in that discussion curi says the following, about me:
and then he was hostile to concepts like keeping track of what points he hadn't answered or talking about discussion methodology itself. he was also, like many people, hostile to using references.
I'd just like to say, for the record, that that is not an accurate characterization of my opinion or attitudes, and I do not believe it is an accurate characterization of my words either. What is true is that we'd been talking about various Popperish things, and then curi switched to only wantin...
Disclosure: I haven't read Popper in the original (nor do I plan to in the near future; sorry, other priorities). I've just had many people mention his name to me in the past, usually right before they shot themselves in the foot. It typically goes like this:
There is a scientific consensus (or at least current best guess) about X. There is a young smart person with their pet theory Y. As the first step, they invoke Popper to say that science didn't actually prove X, because it is not the job of science to actually prove things; science can merely falsify ...
Critical Rationalism (CR)
CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.
Epistemology offers methods, not answers. It doesn’t tell you which theory of gravity is true, it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish, instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).
CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.
Fallibility
CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.
There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).
Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.
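The shape of this regress can be sketched mechanically (a toy illustration of my own, with hypothetical names, not anything from Popper): whatever justification is offered for a claim becomes the next claim to be challenged, so the chain grows without ever closing.

```python
def regress(justify, claim, rounds):
    """Apply the regress pattern: each claim offered as a guarantee
    is itself challenged, producing an ever-growing chain of claims."""
    chain = [claim]
    for _ in range(rounds):
        # "How do you guarantee THAT claim is correct?"
        chain.append(justify(chain[-1]))
    return chain

# Any justification strategy just extends the chain; it never terminates it.
chain = regress(lambda c: f"a claim defending ({c})", "initial claim", 3)
```

The `rounds` cutoff only exists so the sketch halts; the argument's point is precisely that the pattern can be repeated indefinitely.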
CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.
Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.
So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.
Justification is the Major Error
The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.
Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.
Fallible Knowledge
CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.
When a new idea is proposed, don’t ask “How do you know it?” or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it’s a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn’t prevent it from being useful and effective (e.g. building spacecraft that successfully reach the moon). You don’t need justification or perfection to reach the moon, you just need to fix errors in your designs until they’re good enough to work. This approach avoids the regress problems and is compatible with fallibility.
The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.
CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”
Guesses and Criticism
Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)
How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you’ll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating to direct more attention there and try to improve something).
CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.
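As a rough sketch (my own toy model with hypothetical names, not Popper's formulation), the guess-and-criticize cycle can be rendered as a loop: brainstorm candidate guesses, discard any that some critical test refutes, and keep the survivors as tentative knowledge.

```python
import random

def guess_and_criticize(brainstorm, criticisms, rounds=50, seed=0):
    """Generate guesses and reject any that a known criticism refutes.
    Survivors are tentative: a future criticism may still eliminate them."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(rounds):
        guess = brainstorm(rng)
        if not any(refutes(guess) for refutes in criticisms):
            survivors.append(guess)  # no known criticism applies
    return survivors

# Toy problem: guesses are integers; criticisms refute odd or small ones.
criticisms = [lambda g: g % 2 != 0, lambda g: g < 10]
survivors = guess_and_criticize(lambda rng: rng.randint(0, 20), criticisms)
```

The criticisms here are fixed for simplicity, but on CR's account they are guesses too, equally open to criticism.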
Science and Evidence
CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.
Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
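The "infinitely many compatible hypotheses" point has a simple concrete illustration (my own, not from the text): two hypotheses can agree on every observed data point yet contradict each other everywhere else, so the shared data cannot single one out.

```python
# Observed data points (toy example).
observed_xs = [0, 1, 2, 3]

def h1(x):
    return x * x

def h2(x):
    # Agrees with h1 at x = 0, 1, 2, 3 (the extra term vanishes there),
    # but diverges at every other x.
    return x * x + x * (x - 1) * (x - 2) * (x - 3)

both_fit = all(h1(x) == h2(x) for x in observed_xs)   # compatible with the data
contradict = h1(4) != h2(4)                           # yet mutually inconsistent
```

Adding further data points only forces a higher-degree correction term; any finite data set leaves infinitely many such rivals.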
These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)
CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.
We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.
Seems like these "critical arguments" do a lot of heavy lifting.
Suppose you make a critical argument against my hypothesis, and the argument feels smart to you but silly to me. I make a counter-argument, which to me feels like it completely demolishes your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of argument are similarly fruitless.
Now what?
In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which...