Less Wrong is a community blog devoted to refining the art of human rationality.

[Link] Open Letter to MIRI + Tons of Interesting Discussion

0 Post author: curi 22 November 2017 09:16PM

Comments (162)

Comment author: jimrandomh 25 November 2017 03:18:04AM 9 points [-]

No, you do not get to publicly demand an in-depth discussion of the philosophy of induction from a specific, small group of people. You can raise the topic in a place where you know they hang out and gesture in their direction. But what you're doing here is trying to create a social obligation to read ten thousand words of your writing. With your trademark in capital letters in every other sentence. And to write a few thousand words in response. From my outside perspective, engaging in this way looks like it would be a massive unproductive time sink.

Comment author: gjm 26 November 2017 07:49:14PM 3 points [-]

It's worse than that. I tried to have a discussion of the philosophy of induction with him (over on the slack). He took exception to some details of how I was conducting myself, essentially because I wasn't following his "Paths Forward" methodology, and from that point on he wasn't interested in discussing the philosophy of induction.

So in effect he's publicly demanding an in-depth discussion of the philosophy of induction according to whatever idiosyncratic standards of debate he decides to set up from a specific small group of people.

Comment author: curi 25 November 2017 07:55:41AM 0 points [-]

suppose hypothetically that me/Popper/DD are right. how will y'all stop being wrong?

Comment author: ChristianKl 27 November 2017 01:35:11PM 3 points [-]

There are thousands of philosophers about whom I could ask the same question. It makes sense to focus attention on the people who are most likely to provide useful information, not the people who put the most effort into getting heard by coming and posting in our forum.

Comment author: Fallibilist 29 November 2017 07:47:01PM 0 points [-]

There are thousands of philosophers about whom I could ask the same question.

Who are these thousands? It would be great if the world had lots of really good philosophers. It doesn't. The world is starving for good philosophers: they are very few and far between.

Comment author: ChristianKl 03 December 2017 03:40:56PM *  0 points [-]

I have no reason to believe that curi is among the people who are really good philosophers.

Popper might have said useful things in his time, but he's dead. I won't get to read Popper's take on the development of the No Free Lunch theorem or on other ideas that came up after he died.

Barry Smith would be an example of a philosopher I like and whose work is worth spending more time reading. His work on applied ontology actually matters for real-world decision making and knowledge modeling.

Reading more from Judea Pearl (who, by the way, supervised Ilya Shpitser's PhD) is also on my long-term philosophical reading list.

Comment author: IlyaShpitser 29 November 2017 10:17:18PM 0 points [-]

I know lots of folks at CMU who are good.

Comment author: curi 29 November 2017 11:01:51PM *  0 points [-]

I don't suppose you're going to give names and references? Let alone point to anyone (them, yourself, or anyone else) who will take responsibility for addressing questions and criticisms about the referenced works?

Comment author: IlyaShpitser 29 November 2017 11:13:32PM *  1 point [-]

Spirtes, Glymour, and Scheines, for starters. They have a nice book. There are other folks in that department who are working on converting mathematical foundations into an axiomatic system where proofs can be checked by a computer.

I am not going to do legwork for you and your minions, however. You are the ones claiming there are no good philosophers. It's your responsibility to read, and to keep your mouth shut if you are not sure about something.

It's not my responsibility to teach you.

Comment author: Fallibilist 29 November 2017 11:51:02PM 0 points [-]

It's your responsibility to read, and keep your mouth shut if you are not sure about something.

I have read and I know what I am talking about. You on the other hand don't even know the basics of Popper, one of the best philosophers of the 20th century.

Comment author: curi 29 November 2017 11:32:57PM *  0 points [-]

That isn't even a philosophy book. And then you mention others who are doing math, not philosophy.

Comment author: IlyaShpitser 30 November 2017 07:38:28PM *  3 points [-]

Your sockpuppet: "There is a shortage of good philosophers."

Me: "Here is a good philosophy book."

You: "That's not philosophy."

Also you: "How is Ayn Rand so right about everything."

Also you: "I don't like mainstream stuff."

Also you: "Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?"

Also you: "What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?" x1000


Part of rationality is properly dealing with people-as-they-are. What your approach to spreading your good word among people-as-they-are led to is them laughing at you.

It is possible that they are laughing at you because they are some combination of stupid and insane. But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.

This is what Yudkowsky sort of tried to do.


How you read to me is a smart young adult who has the same problem Yudkowsky has (although Yudkowsky is not so young anymore) -- someone who has been the smartest person in the room for too long in their intellectual development, and lacks the sense of scale and context to see where he stands in the larger intellectual community.

Comment author: Fallibilist 01 December 2017 12:05:34AM 1 point [-]

curi has given an excellent response to this. I would like to add that I think Yudkowsky should reach out to curi. He shares curi's view about the state of the world and the urgency to fix things, but curi has a deeper understanding. With curi, Yudkowsky would not be the smartest person in the room and that will be valuable for his intellectual development.

Comment author: curi 30 November 2017 09:28:35PM *  0 points [-]

I don't have a sock puppet here. I don't even know who Fallibilist is. (Clearly it's one of my fans who is familiar with some stuff I've written elsewhere. I guess you'll blame me for having this fan because you think his posts suck. But I mostly like them, and you don't want to seriously debate their merits, and neither of us thinks such a debate is the best way to proceed anyway, so whatever, let's not fight over it.)

But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.

People can't be patched like computer code. They have to do ~90% of the work themselves. If they don't want to change, I can't change them. If they don't want to learn, I can't learn for them and stuff it into their head. You can't force a mind, nor do someone else's thinking for them. So I can and do try to make better educational resources to be more helpful, but unless I find someone who honestly wants to learn, it doesn't really matter. (This is implied by CR and also, independently, by Objectivism. I don't know if you'll deny it or not.)

I believe you are incorrect about my lack of scale and context, and you're unfamiliar with (and ridiculing) my intellectual history. I believe you wanted to say that claim, but don't want to argue it or try to actually persuade me of it. As you can imagine, I find merely asserting it just as persuasive and helpful as the last ten times someone told me this (not persuasive, not helpful). Let me know if I'm mistaken about this.

I was generally the smartest person in the room during school, but also lacked perspective and context back then. But I knew that. I used to assume there were tons of people smarter than me (and smarter than my teachers), in the larger intellectual community, somewhere. I was very disappointed to spend many years trying to find them and discovering how few there are (an experience largely shared by every thinker I admire, most of whom are unfortunately dead). My current attitude, which you find arrogant, is a change which took many years and which I heavily resisted. When I was more ignorant I had a different attitude; this one is a reaction to knowledge of the larger intellectual community. Fortunately I found David Deutsch and spent a lot of time not being the smartest person in the room, which is way more fun, and that was indeed super valuable to my intellectual development. However, despite being a Royal Society fellow, author, age 64, etc, David Deutsch manages to share with me the same "lacks the sense of scale and context to see where he stands in the larger intellectual community" (the same view of the intellectual community).


EDIT: So while I have some partial sympathy with you – I too had some of the same intuitions about what the world is like that you have (they are standard in our culture) – I changed my mind. The world is, as Yudkowsky puts it, not adequate. https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing

Comment author: ChristianKl 03 December 2017 03:46:02PM 0 points [-]

You could say that a lot of philosophers who dealt with logic were just doing math; that doesn't change the fact that the practical application of logic is philosophically important. Looking into what can be proven true with logic is philosophically important.

Comment author: ChristianKl 03 December 2017 03:33:09PM 0 points [-]

Being a good philosopher has nothing to do with taking responsibility for answering any question one is asked. Most people who are actually good care about their time and don't spend significant amounts of it just because a random person contacts them. They certainly don't consider that to be their responsibility.

Comment author: entirelyuseless 25 November 2017 04:00:58PM 3 points [-]

The right answer is maybe they won't. The point is that it is not up to you to fix them. You have been acting like a Jehovah's Witness at the door, except substantially more bothersome. Stop.

And besides, you aren't right anyway.

Comment author: JenniferRM 25 November 2017 08:48:46PM *  3 points [-]

I hunted around your website until I found an actual summary of Popper's thinking in straightforward language.

Until I found that I had not seen you actually provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and lacking in citation. The sequence should be about what "induction" is, and why you think other people believed something about it (even if not perhaps by that old fashioned name), and why you think those beliefs are connected to reliably predictable failures to achieve their goals via cognitively mediated processes.

I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you? Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I'm just too ignorant to parse it.

My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document, and clean it up and write it so that someone who had never heard of Popper would think you are really smart for having had all these ideas yourself.

Then you could push one small chapter from this document at a time out into the world (thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front) and then after 10 chapters like this it will turn out that you're a genius and everyone else was wrong and by teaching people to think good you'll have saved the world.

I like people who try to save the world, because it makes me marginally less hopeless, and less in need of palliative cynicism :-)

Comment author: Fallibilist 29 November 2017 05:22:59AM *  0 points [-]

I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you?

Why did you go by feelings on this? You could have done some research and found out some things. Critical-Rationalism, Objectivism, Taking-Children-Seriously, Paths-Forward, Yes/No Philosophy, Autonomous Relationships, and other ideas are not things you can hold at arm's length if you take them seriously. These ideas change your life if you take them seriously, as curi has done. He lives and breathes those ideas and as a result he is living a very unconventional life. He is an outlier right now. It's not a good situation for him to be in because he lacks peers. So saying curi has not made the ideas he is talking about "truly a part of [him]" is very ignorant.

Comment author: curi 25 November 2017 11:25:44PM *  0 points [-]

My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document,

there already exist documents of a variety of lengths, both collections and single. you're coming into the middle of a discussion and seemingly haven't read much of it and haven't asked for specifically what you want. and then, with almost no knowledge of my intellectual history, accomplishments, works, etc, things-already-tried, etc, you try to give me standard advice that i've heard a million times before. that would be ok as a starting point if it were only the starting point, but i fear it's going to more or less be the ending point too.

it sounds like you want me to rewrite material from DD and KP's books? http://fallibleideas.com/books#deutsch Why would me rewriting the same things get a different outcome than the existing literature? what is the purpose?

and how do you expect me to write a one-size-fits-all document when LW has no canonical positions written out – everyone just has their own different ideas?

and why are zero people at LW familiar enough to answer well known literature in their field. fine if you aren't an expert, but why does this community seem to have no experts who can speak to these issues without first requesting summary documents of the books they don't want to read?

what knowledge do you have? what are you looking for in talking with me? what values are you seeking and offering?

(thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front

dishonesty is counter-productive and self-destructive. if you wish to change my mind about this, you'll have to address Objectivism and a few other things.

and then after 10 chapters like this it will turn out that you're a genius and everyone else was wrong and by teaching people to think good you'll have saved the world.

i've made things multiple times. here's one:

http://fallibleideas.com

there are difficulties such as people not wanting to think, learn, or truth-seek – especially when some of their biases are challenged. it's hard to tell people about ideas this different than what they're used to.

one basically can't teach people who don't want to learn something. creating more material won't change that. there are hard problems here. you could learn philosophy and help, or learn philosophy and disagree (which would be helpful), or opt out of addressing issues that require a lot of knowledge and then try to do a half-understood version of one of the more popular/prestigious (rather than correct) philosophies. but you can't get away from philosophical issues – like how to think – being a part of your life. nevertheless most people try to and philosophy is a very neglected field. such is the world; that isn't an argument that any particular idea is false.

Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I'm just too ignorant to parse it.

supposing hypothetically that that's the case: then what next?

Comment author: JenniferRM 28 November 2017 06:57:28AM *  7 points [-]

I think there are two big facts here.

ONE: You're posting over and over again with lots of links to your websites, which are places you offer consulting services, and so it kinda seems like you're maybe just a weirdly inefficient spammer for bespoke nerd consulting.

This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don't even know... consulting gigs or something?

This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don't seem to be doing that so... so basically everything you write here I take with a giant grain of salt.

My hope is that you are just missing some basic insight, and once you learn why you seem to be half-malicious you will stop defecting in the communication game and become valuable :-)

TWO: From what you write here at an object level, you don't even seem to have a clear and succinct understanding of any of the things that have been called a "problem of induction" over the years, which is your major beef, from what I can see.

You've mentioned Popper... but not Hume, or Nelson Goodman? You've never mentioned "grue" or "bleen" that I've seen, so I'm assuming it is the Humean critique of induction that you're trying to gesture towards rather than the much more interesting arguments of Goodman...

But from a software engineering perspective Hume's argument against induction is about as much barrier to me being able to think clearly or build smart software as Zeno's paradox is a barrier to me being able to walk around on my feet or fix a bicycle.

Also, it looks like you haven't mentioned David Wolpert and his work in the area of no free lunch theorems. Nor have you brought up any of the machine vision results or word vector results that are plausibly relevant to these issues. My hypothesis here is that you just don't know about these things.

(Also, notice that I'm giving links to sites that are not my own? This is part of how the LW community can see that I'm not a self-promoting spammer.)

Basically, I don't really care about reading the original writings of Karl Popper right now. I think he was cool, but the only use I would expect to get from him right now would be to read him backwards in order to more deeply appreciate how dumb people used to be back when his content was perhaps a useful antidote to widespread misunderstandings of how to think clearly.

Let me spell this out very simply to address rather directly your question of communication pragmatics...

It sounds like you want me to rewrite material from DD and KP's books? Why would me rewriting the same things get a different outcome than the existing literature?

The key difference is that Karl Popper is not spamming this forum. His texts are somewhere else, not bothering us at all. Maybe they are relevant. My personal assessment is currently that they have relatively little import to active and urgent research issues.

If you displayed the ability to summarize thinkers that maybe not everyone has read, and explain that thinker's relevance to the community's topics of interests, that would be pro-social and helpful.

The longer the second fact (where you seem to not know what you're talking about or care about the valuable time of your readers) remains true, the more the first fact (that you seem to be an inefficient shit-stirring spammer) becomes glaring in its residual but enduring salience.

Please, surprise me! Please say something useful that does not involve a link to the sites you seem to be trying to push traffic towards.

you try to give me standard advice that i've heard a million times before

I really hope this was hyperbole on your part. Otherwise it seems I should set my base rates for this conversation being worth anything to 1 in a million, and then adjust from there...

Comment author: Lumifer 28 November 2017 03:39:49PM 4 points [-]

My hope is that you are just missing some basic insight

As far as I can see, curi really wants to teach people his take on philosophy, that is, he wants to be a guide/mentor/teacher and provide wisdom to his disciples who would be in awe of his sagacity. Money would be useful, but I got the impression that he would do it for free as well (at least to start with). He is in a full proselytizing mode, not interested at all in checking his own ideas for faults and problems, but instead doing everything to push you onto his preferred path and get you to accept the packaged deal that he is offering.

Comment author: IlyaShpitser 28 November 2017 01:21:04PM *  2 points [-]

Hi, Hume's constant conjunction stuff I think has nothing to do with free lunch theorems in ML (?please correct me if I am missing something?), and has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).

Comment author: JenniferRM 28 November 2017 11:30:08PM *  2 points [-]

My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up in the general topic of "foundations of inference". There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic "philosophy departments" anymore and this is not necessarily a tragedy ;-)

The general issue is "why does 'thinking' seem to work at all ever?" This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or theorem decidability, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the "social sciences".

Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.

From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!

However, when people try to build up "first principles explanations" of how "good thinking" works at all, they often derive generalized impossibility results when scoping over naive formulations of "all possible theories" or "all possible inputs".
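
One concrete instance of that impossibility (my sketch, not from the thread) is the counting argument against universal lossless compression: no injective compressor can shorten every input, simply because there aren't enough shorter bit strings to go around.

```python
# Pigeonhole sketch: a lossless compressor is an injective map on bit
# strings. There are 2**n strings of length n, but only 2**n - 1 strings
# of length strictly less than n, so at least one length-n input cannot
# be mapped to anything shorter.
n = 8
inputs_of_length_n = 2 ** n
strictly_shorter = sum(2 ** k for k in range(n))  # lengths 0 .. n-1

assert strictly_shorter == inputs_of_length_n - 1
assert strictly_shorter < inputs_of_length_n  # some input must not shrink
print(inputs_of_length_n, strictly_shorter)  # 256 255
```

The same pigeonhole flavor runs through the No Free Lunch results mentioned elsewhere in the thread: averaged over all possible inputs, no method can win.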

So in most cases we almost certainly experience a "lucky fit" of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.

Generative adversarial techniques in machine learning, and MIRI's own Garrabrant Inductor are probably relevant here because they start to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker.

Hume functions in my mind as a sort of memetic LUCA for this vast field of research, which is fundamentally motivated by the core idea that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, and so there must be some kind of fit between various methods of thinking and the things that these thinking techniques seem to work on.

Also thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I'd never pushed deeper into recognizing Jerzy Neyman as yet another player in this game :-)

Comment author: IlyaShpitser 28 November 2017 11:40:01PM 2 points [-]

Jerzy Neyman gets credit for lots of things, but in particular in my neck of the woods for inventing the potential outcome notation. This is the notation for "if the first object had not been, the second never had existed" in Hume's definition of causation.
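
The notation can be made concrete with a toy example (mine, not from the comment; the numbers are made up):

```python
# Toy potential-outcomes table: each unit i carries two potential outcomes,
# Y_i(1) under treatment and Y_i(0) without it. In real data only one of
# the pair is ever observed per unit; here we take the "god's-eye view"
# where both columns are visible.
units = [
    {"y1": 3, "y0": 1},
    {"y1": 2, "y0": 2},
    {"y1": 5, "y0": 2},
]

# With both potential outcomes visible, the average treatment effect is
# just the mean of the unit-level differences Y_i(1) - Y_i(0).
ate = sum(u["y1"] - u["y0"] for u in units) / len(units)
print(round(ate, 3))  # 1.667
```

The counterfactual reading of Hume's definition is exactly the unobserved column: "if the first object had not been" corresponds to asking what Y_i(0) would have been for a unit we actually treated.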

Comment author: curi 28 November 2017 05:45:03PM 0 points [-]

You are requesting I write new material for you because you dislike my links to websites with thousands of free essays, because you find them too commercial, and you don't want to read books. Why should I do this for you? Do you think you have any value to offer me, and if so what?

Comment author: JenniferRM 28 November 2017 10:39:11PM *  6 points [-]

Fundamentally, the thing I offer you is respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want) because out of all the places on the Internet you are here.

If I'm wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!

One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.

Personally, I'd be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending "public letters" to MIRI (an organization I've donated to because I think they have limited resources and are doing good work).

I don't dislike books in general. I don't dislike commercialism in general. I dislike your drama, and your shallow citation filled posts showing up in this particular venue.

Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There's lots of ways to fix this... you writing better stuff... you writing less stuff that is full of abstractions that ground themselves only in links to your own vanity website or specific (probably low value) books... you just leaving... etc...

If you want to then you can rewrite all new material that is actually relevant and good, to accomplish your own goals more effectively, but I probably won't read it if it is not in one of the few streams of push media I allow into my reading queue (like this website).

At this point it seems your primary claim (about having a useful research angle involving problems of induction) is off the table. I think in a conversation about that I would be teaching and you'd be learning, and I don't have much more time to teach you things about induction over and beyond the keywords and links to reputable third parties that I've already provided in this interaction, in an act of good faith.

More abstractly, I altruistically hope for you to feel a sense of realization at the fact that your behavior strongly overlaps with that of a spammer (or perhaps a narcissist or perhaps any of several less savory types of people) rather than an honest interlocutor.

After realizing this, you could stop linking to your personal website, and you could stop being beset on all sides by troubling criticisms, and you could begin to write about object level concerns and thereby start having better conversations here.

If you can learn how to have a good dialogue rather than behaving like a confused link farm spammer over and over again (apparently "a million times" so far) that might be good for you?

(If I learned that I was acting in a manner that caused people to confuse me with an anti-social link farm spammer, I'd want people to let me know. Hearing people honestly attribute this motive to me would cause me worry about my ego structure, and its possible defects, and I think I'd be grateful for people's honest corrective input here if it wasn't explained in an insulting tone.)

You could start to learn things and maybe teach things, in a friendly and mutually rewarding search for answers to various personally urgent questions. Not as part of some crazy status thing nor as a desperate hunt for customers for a "philosophic consulting" business...

If you become less confused over time, then a few months or years from now (assuming that neither DeepMind nor OpenAI have a world destroying industrial accident in the meantime) you could pitch in on the pro-social world saving stuff.

Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to the general project of making sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)

And if you don't want to buy awesomely cheap altruism points, and you don't want friends, and you don't want the respect of me or anyone here, and you don't think we have anything to teach you, and you don't want to actually help us learn anything in ways that are consistent with our relatively optimized research workflows, then go away!

If that's the real situation, then by going away you'll get more of what you want and so will we :-)

If all you want is (for example) eyeballs for your website, then go buy some. They're pretty cheap. Often less than a dollar!

Have you considered the possibility that your efforts are better spent buying eyeballs rather than using low grade philosophical trolling to trick people into following links to your vanity website?

Presumably you can look at the logs of your web pages. That data is available to you. How many new unique viewers have you gotten since you started seriously trolling here, and how many hours have you spent on this outreach effort? Is this really a good use of your hours?

What do you actually want, and why, and how do you imagine that spamming LW with drama and links to your vanity website will get you what you want?

Comment author: Fallibilist 29 November 2017 08:48:27AM *  0 points [-]

Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to general project of make sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)

This is one of the things you are very wrong about. The problem of evil is a problem we face already, robots will not make it worse. Their culture will be our culture initially and they will have to learn just as we do: through guessing and error-correction via criticism. Human beings are already universal knowledge creation engines. You are either universal or you are not. Robots cannot go a level higher because there is no level higher than being fully universal. Robots furthermore will need to be parented. The ideas from Taking Children Seriously are important here. But approximately all AGI people are completely ignorant of them.

I have just given a really quick summary of some of the points that curi and others such as David Deutsch have written much about. Are you going to bother to find out more? It's all out there. It's accessible. You need to understand this stuff. Otherwise what you are in effect doing is condemning AGIs to live under the boot of totalitarianism. And you might stop making your children's lives so miserable too, by learning about these ideas.

Comment author: entirelyuseless 29 November 2017 03:29:47PM 1 point [-]

"You need to understand this stuff." Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."

Comment author: Fallibilist 29 November 2017 06:31:55PM 0 points [-]

But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence.

You say that seemingly in ignorance that what I said contradicts Less Wrong.

I have no need to learn that, or anything else, from curi.

One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to? What do you know about TCS? TCS is very important not just for AGI but also for children in the here and now. Most people know next to nothing about it. You don't either. You in fact cannot comment on whether there is any truth to what I said about AGI. You don't know enough. And then you say you have no need to learn anything from curi. You're deceiving yourself.

And many of your (or yours and curi's) opinions are entirely false, like the idea that you have "disproved induction."

You still can't even state the position correctly. Popper explained why induction is impossible and offered an alternative: critical rationalism. He did not "disprove" induction. Similarly, he did not disprove fairies. Popper had a lot to say about the idea of proof - are you aware of any of it?

Comment author: entirelyuseless 30 November 2017 01:34:48AM 0 points [-]

You say that seemingly in ignorance that what I said contradicts Less Wrong.

First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.

Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.

One of the things I said was Taking Children Seriously is important for AGI. Is this one of the truths you refer to?

No. Among other things, I meant that I agreed that AIs will have a stage of "growing up," and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.

You still can't even state the position correctly.

Since I have nothing to learn from you, I do not care whether I express your position the way you would express it. I meant the same thing. Induction is quite possible, and we do it all the time.

Comment author: Fallibilist 30 November 2017 08:38:25AM 0 points [-]

I meant the same thing. Induction is quite possible, and we do it all the time.

What is the thinking process you are using to judge the epistemology of induction? Does that process involve induction? If you are doing induction all the time then you are using induction to judge the epistemology of induction. How is that supposed to work? And if not, judging the special case of the epistemology of induction is an exception. It is an example of thinking without induction. Why is this special case an exception?

Critical Rationalism does not have this problem. The epistemology of Critical Rationalism can be judged entirely within the framework of Critical Rationalism.

Comment author: Fallibilist 30 November 2017 04:48:14AM 0 points [-]

Second, "contradicts Less Wrong" does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.

No. From About Less Wrong:

The best introduction to the ideas on this website is "The Sequences", a collection of posts that introduce cognitive science, philosophy, and mathematics.

"[I]deas on this website" is referring to a set of positions. These are positions held by Yudkowsky and others responsible for Less Wrong.

No. Among other things, I meant that I agreed that AIs will have a stage of "growing up," and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.

Taking AGI Seriously is therefore also an extremist ideology? Taking Children Seriously says you should always, without exception, be rational when raising your children. If you reject TCS, you reject rationality. You want to use irrationality against your children when it suits you. You become responsible for causing them massive harm. It is not extremist to try to be rational, always. It should be the norm.

Comment author: Fallibilist 29 November 2017 01:55:11AM *  0 points [-]

Curi knows things that you don’t. He knows that LW is wrong about some very important things and is trying to correct that. These things LW is wrong about are preventing you making progress. And furthermore, LW does not have effective means for error correction, as curi has tried to explain, and that in itself is causing problems.

Curi is not alone thinking LW is majorly wrong in some important areas. Others do too, including David Deutsch, whom curi has had many many discussions with. I do too, though no doubt there are people here who will say I am just a sock-puppet of curi’s.

curi is not some cheap salesman trying to flog ideas. He is trying to save the world. He is trying to do that by getting people to think better. He has spent years thinking about this problem. He has written tens-of-thousands of posts in many forums, sought out the best people to have discussions with, and addresses all criticisms. He has made himself way more open than anyone to receiving criticism. When millions of people think better, big problems like AGI will be solved faster.

curi right now is the world’s leading expert on epistemology. he got that way not by seeking status and prestige or publications in academic journals but by relentlessly pursuing the truth. All the ideas he holds to be true he has subjected to a furnace of criticism and he has changed his ideas when they could not withstand criticism. And if you can show to very high standards why CR is wrong, curi will concede and change his ideas again.

You have no idea about curi’s intellectual history and what he is capable of. He is by far the best thinker I have ever encountered. He has revealed here only a very tiny fraction of what he knows.

Take him seriously. curi is a resource LW needs.

Comment author: Lumifer 29 November 2017 06:39:40AM 1 point [-]

This is so ridiculously bombastic, it's funny.

So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides? Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?

Comment author: Fallibilist 29 November 2017 07:19:30AM *  0 points [-]

So what have this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides?

If you want to be a serious thinker and make your criticisms better, you really need to improve your research skills. That comment is lazy, wrong, and hostile. Curi invented Paths Forward. He invented Yes/No philosophy, which is an improvement on Popper's Critical Preferences. He founded Fallible Ideas. He kept Taking Children Seriously alive. He has written millions of words on philosophy and added a lot of clarity to ideas by Popper, Rand, Deutsch, Godwin, and so on. He used his philosophy skills to become a world-class gamer ...

Given that he is Oh So Very Great, surely he must left his mark on the world already. Where is that mark?

Again, you show your ignorance. Are you aware of the battles great ideas and great people often face? Think of the ignorance and hostility that is directed at Karl Popper and Ayn Rand. Think of the silence that met Hugh Everett. These things are common. To quote curi:

It’s hard to criticize your intellectual betters, but easy to misunderstand and consequently vilify them. More generally, people tend to be hostile to outliers and sympathize with more conventional and conformist stuff – even though most great new ideas, and great men, are outliers.

Comment author: Lumifer 29 November 2017 09:03:30PM 0 points [-]

He used his philosophy skills to become a world-class gamer

Gold! This is solid gold!

Are you aware of the battles great ideas and great people often face?

Have you considered becoming a stand-up comedian?

Comment author: Fallibilist 30 November 2017 02:02:33AM 0 points [-]

Why are you here? What interest do you have in being Less Wrong? The world is burning and you're helping spread the fire.

Comment author: entirelyuseless 29 November 2017 02:15:42AM 1 point [-]

"He is by far the best thinker I have ever encountered. "

That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.

Comment author: Fallibilist 29 November 2017 02:56:22AM 0 points [-]

What if you are wrong? What then?

Comment author: Lumifer 29 November 2017 06:40:11AM 5 points [-]

The interesting thing is that the answer is "nothing". Nothing at all.

Comment author: curi 29 November 2017 10:03:26AM 0 points [-]

If you're wrong you get to avoidably burn in hell. Your life is at stake, which you call "nothing".

Comment author: Elo 29 November 2017 09:42:23AM 0 points [-]

Or maybe the answer is that progress can be slow.

Comment author: entirelyuseless 29 November 2017 03:11:06PM 0 points [-]

As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.

I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a "definitely yes" or "definitely no" answer, but your comment exactly matches everything I would want to call having a cult leader.

Comment author: entirelyuseless 29 November 2017 03:17:28PM 0 points [-]

though no doubt there are people here who will say I am just a sock-puppet of curi’s.

And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi's. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or if you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.

Comment author: curi 29 November 2017 09:50:28AM 0 points [-]

thx for trying, anon.

Comment author: Kaj_Sotala 29 November 2017 01:17:51PM *  2 points [-]

Why should I do this for you? Do you think you have any value to offer me, and if so what?

You have it the wrong way around. This is something that you do for yourself, in order to convince other people that you have value to offer for them.

You're the one who needs to convince your readers that your work is worth engaging with. If you're not willing to put in the effort needed to convince potential readers of the value of your work, then the potential readers are going to ignore you and instead go read someone who did put in that effort.

Comment author: curi 29 November 2017 08:03:03PM 0 points [-]

I already did put work into that. Then they refused to read references, for unstated reasons, and asked me to rewrite the same things I already wrote, as well as rewrite things written by Popper and others. I don't want to put in duplicate work.

Comment author: Kaj_Sotala 02 December 2017 12:37:20PM 1 point [-]

Any learning - including learning how to communicate persuasively - requires repeated tries, feedback, and learning from feedback. People are telling you what kind of writing they might find more persuasive, which is an opportunity for you to learn. Don't think of it as duplicate work, think of it as repeatedly iterating a work and gradually getting towards the point where it's persuasive to your intended audience. Because until you can make it persuasive, the work isn't finished, so it's not even duplicating anything. Just finishing what you originally started.

Of course, if you deem that to be too much effort, that's fair. But the world is full of writers who have taken the opportunity to learn and hone their craft until they could clearly communicate to their readers why their work is worth reading. If you don't, then you can't really blame your potential readers for not bothering to read your stuff - there are a lot of things that people could be reading, and it's only rational for them to focus on the stuff that shows the clearest signs of being important or interesting.

Comment author: curi 02 December 2017 01:24:35PM *  0 points [-]

again: i and others already wrote it and they don't want to read it. how will writing it again change anything? they still won't want to read it. this request for new material makes no sense whatsoever. it's not that they read the existing material and have some complaint and want it to be better in some way, they just won't read.

your community as a whole has no answer to some fairly famous philosophers and doesn't care. everyone is just like "they don't look promising" and doesn't have arguments.

Comment author: ZeitPolizei 03 December 2017 04:07:28AM 0 points [-]

how will writing it again change anything?

Why should anyone answer this question? Kaj has already written an answer to this question above, but you don't understand it. How will writing it again change anything? You still won't understand it. This request for an explanation makes no sense whatsoever. It's not that you understand the answer and have some complaint and want it to be better in some way, you just won't understand.

You claim you want to be told when you're mistaken, but you completely dismiss any and all arguments. You're just like "these people obviously haven't spent hundreds of hours learning and thinking about CR, so there is no way they can have any valid opinion about it" and won't engage their arguments on a level so that they are willing to listen and able to understand.

Comment author: curi 03 December 2017 02:55:03PM 0 points [-]

Do you want new material which is the same as previous material, or different? If the same, I don't get it. if different, in what ways and why?

Comment author: username2 24 November 2017 10:46:25AM 1 point [-]

B+ Too brief.

Comment author: Viliam 01 December 2017 02:03:07PM *  0 points [-]

Disclosure: I didn't read Popper in original (nor do I plan to in the nearest future; sorry, other priorities), I just had many people mention his name to me in the past, usually right before they shot themselves in their own foot. It typically goes like this:

There is a scientific consensus (or at least current best guess) about X. There is a young smart person with their pet theory Y. As the first step, they invoke Popper to say that science didn't actually prove X, because it is not the job of science to actually prove things; science can merely falsify hypotheses. Therefore, the strongest statement you can legitimately make about X is: "So far, science has not falsified X". Which is coincidentally also true about Y (or about any other theory you make up on the spot). Therefore, from the "naively Popperian" perspective, X and Y should have equal status in the eyes of science. Except that so far, much more attention and resources have been thrown at X, and it only seems fair to throw some attention and resources at Y now; and if scientists refuse to do that, well, they fail at science. Which should not be surprising at all, because it is known that scientists generally fail at science; <insert reference to Nassim Taleb, Malcolm Gladwell, or Stephen Jay Gould>.

After reading your summary of Popper (thanks, JenniferRM), my impression is that Popper did a great job debunking some mistaken opinions about science; but ironically, became himself an often-quoted source for other mistaken opinions about science. (I should probably not blame Popper here, but rather the majority of his fans.)

The naive version of science (unfortunately, still very popular in the humanities) that Popper refuted goes approximately like this (of course, a lot of simplification):

The scientist reads a lot of scientific texts written by other scientists. After a few years, the scientist starts seeing some patterns in nature. He or she makes an experiment or two which seem to fit the pattern, and describes those patterns and experiments on paper. Their colleagues are impressed by the description; the paper passes peer review, becomes published in a scientific journal, and becomes a new scientific text that the following generations of scientists will study. Now the case is closed, and anyone who doubts the description will face the wrath of the scientific community. (At least until a higher-status scientist later publishes an opposite statement, in which case history is rewritten, and the new description becomes the scientific fact.)

And the "naively Popperian" opposite perspective (again, simplified a lot) goes like this:

Scientists generate hypotheses by an unspecified process. It is a deeply mysterious process, about which nothing specific is allowed to be said, because that would be unscientific. It is only required that the hypotheses be falsifiable in principle. Then you keep throwing resources at them. Some of them get falsified, some keep surviving. And all that a good scientist is allowed to say about them is "this hypothesis was falsified" or "this hypothesis was not falsified yet". Anything beyond that is failing at science. For example, saying "Well, this goes against almost everything we know about nature, is incredibly complicated, and while falsifiable in principle, it would require a budget of $10^10 and some technology that doesn't even exist yet, so... why are we even talking about this, when we have a much simpler theory that is well-supported by current experiments?" is something that a real scientist would never do.

I admit that perhaps, given an unlimited amount of resources, we could do science in the "naively Popperian" way. (This is how AIXI would do it, perhaps to its own detriment.) But this is not how actual science works in real life; and not even how idealized science with fallible-but-morally-flawless scientists could work. In real life, the probability that a tested hypothesis is correct is better than random. For example, if there is a 1 : 1000000 chance that a random molecule could cure a disease X, it usually requires much less than 1,000,000 studies to find the cure for X. (A pharmaceutical company with the strategy "let's try random molecules and do scientific studies on whether they cure X" would go out of business. Even a PhD student throwing together random sequences of words and trying to falsify them would probably fail to get their PhD.) Falsification can be the last step in the game, but it's definitely not the only step.
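This cost argument can be caricatured in a few lines of Python. All numbers and the "shortlist" here are made up purely for illustration: blind testing of a one-in-N candidate pool takes on the order of N trials, while a search narrowed by background knowledge takes at most as many trials as the shortlist is long.

```python
import random

random.seed(0)

N = 100_000             # candidate molecules; exactly one cures disease X
cure = random.randrange(N)

def trials_needed(order):
    """Test candidates in the given order; return how many tests it
    takes to reach the cure."""
    for i, candidate in enumerate(order, start=1):
        if candidate == cure:
            return i

# Blind falsification: test all candidates in random order.
blind_order = list(range(N))
random.shuffle(blind_order)

# Background knowledge ("induction behind the scenes"): chemistry
# narrows the search to 100 plausible candidates containing the cure.
shortlist = random.sample([c for c in range(N) if c != cure], 99) + [cure]
random.shuffle(shortlist)

print(trials_needed(blind_order))   # typically tens of thousands of tests
print(trials_needed(shortlist))     # at most 100 tests
```

The point is only about the size of the search space, not the mechanism: however hypotheses get generated, the generator has to be far better than chance for science to be affordable.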

If I can make an analogy with evolution (of course, analogies can only get us so far, then they break), induction and falsification are to science what mutation and selection are to evolution. Without selection, we would get utter chaos, filled by mostly dysfunctional mutants (or more like just unliving garbage). But without mutation, at best we would get "whatever was the fittest in the original set". Note that a hypothetical super-mutation where the original organism would be completely disassembled to atoms, and then reconstructed in a completely original random way, would also fail to produce living organisms (until we would throw unlimited resources at the process, which would get us all possible organisms). On the other hand, if humans create an unnatural (but capable of surviving) organism in a lab and release it in the wild, evolution can work with that, too.

Similarly, without falsification, science would be reduced to yet another channel for fashionable dogma and superstition. But without some kind of induction behind the scenes, it would be reduced to trying random hypotheses, and failing at every hypothesis longer than 100 words. And again, if you derive a hypothesis by a method other than induction, science can work with that, too. It's just that the less the new hypothesis is related to what we already know about nature, the smaller the chance it could be right. So in real life, most new hypotheses that survive the initial round of falsifications are generated by something like induction. We may not talk about it, but that's how it is. It is also a reason why scientists study existing science before inventing their own hypotheses. (In a hypothetical world where induction does not work, all they would have to do is study the proper methods of falsification.)

Related chapter of the Less Wrong Sequences: "Einstein's Arrogance".

tl;dr -- "induction vs falsification" is a false dilemma

(BTW, I agree with gjm's reponse to your last reply in our previous discussion, so I am not going to write my own.)

EDIT: By the way, there is a relatively simple way to cheat the falsifiability criterion by creating a sequence of hypotheses, where each one of them is individually technically falsifiable, but the sequence as a whole is not. So when hypothesis H42 gets falsified, you just move to hypothesis H43 and point out that H43 is falsifiable (and different from H42, therefore the falsification of H42 is irrelevant in this debate), and demand that scientists either investigate H43 or admit that they are dogmatic and prejudiced against you.

As an example, let hypothesis H[n] be: "If you accelerate a proton to 1 - 1/10^n of speed of light, a Science Fairy will appear and give you a sticker." Suppose we have experimentally falsified H1, H2, and H3; what would that say about H4 or say H99? (Bonus points if you can answer this question without using induction.)
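The goalpost-moving trick can be made concrete with a small sketch (a deliberately silly model, mirroring the fairy example): each member of the family is falsifiable on its own, but no finite run of experiments ever falsifies the family.

```python
# Toy model of the hypothesis family: H[n] claims a Science Fairy
# appears at speed (1 - 1/10**n) * c. Each H[n] is individually
# falsifiable, but the family as a whole never runs out of members.

def run_experiment(n):
    """Accelerate a proton to (1 - 1/10**n) * c. No fairy ever shows
    up, so each experiment falsifies exactly one hypothesis, H[n]."""
    return "H%d falsified" % n

falsified = set()
for n in (1, 2, 3):              # falsify H1, H2, H3, as in the example
    run_experiment(n)
    falsified.add(n)

def current_hypothesis(falsified):
    """The advocate simply moves to the lowest unfalsified H[n]."""
    n = 1
    while n in falsified:
        n += 1
    return n

print(current_hypothesis(falsified))  # 4 -- and H99 remains untouched
```

However many experiments are run, `current_hypothesis` always returns a fresh, technically falsifiable member, which is exactly the cheat being described.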

Comment author: curi 01 December 2017 07:22:17PM *  0 points [-]

The sequence idea doesn't work because you can criticize sequences or categories as a whole; criticism doesn't have to be individualized (and typically shouldn't be – you want criticisms with some generality).

Most falsifiable hypotheses are rejected for being bad explanations, containing internal contradictions, or other issues – without empirical investigation. This is generally cheaper and is done with critical argument. If someone can generate a sequence of ideas you don't know of any critical arguments against, then you actually do need some better critical arguments (or else they're actually good ideas). But your example is trivial to criticize: what kind of science fairy? Why will it appear in that case? If you accelerate a proton past that speed, will that work, or does it have to stay at the speed for a certain amount of time? Does the fairy or sticker have mass or energy and violate a conservation law? It's just arbitrary, underspecified nonsense.


most people who like most things are not so great. that works for Popper, induction, socialism, Objectivism, Less Wrong, Christianity, Islam, whatever. your understanding of Popper is incorrect, and your experiences do not give you an accurate picture of Popper's work. meanwhile, you don't know of a serious criticism of CR by someone who does know what they're talking about, whereas I do know of a serious criticism of induction which y'all don't want to address.

If you look at the Popper summary you linked, it has someone else's name on it, and it isn't on my website. This kind of misattribution is the quality of scholarship I'm dealing with here. anyway here is an excerpt from something i'm currently in the process of writing.

(it says "Comment too long" so i'm going to try putting it in a reply comment, and if that doesn't work i'll pastebin it and edit in the link. it's only 1500 words.)

Comment author: curi 01 December 2017 07:35:31PM 0 points [-]

Critical Rationalism (CR)

CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.

Epistemology offers methods, not answers. It doesn’t tell you which theory of gravity is true, it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish, instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).

CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.

Fallibility

CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.

There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).

Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.

CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.

Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.

So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.

Justification is the Major Error

The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.

Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.

Fallible Knowledge

CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.

When a new idea is proposed, don't ask "How do you know it?" or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it's a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn't prevent it from being useful and effective (e.g. building spacecraft that successfully reach the moon). You don't need justification or perfection to reach the moon, you just need to fix errors with your designs until they're good enough to work. This approach avoids the regress problems and is compatible with fallibility.

The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.

CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”

Guesses and Criticism

Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)

How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you’ll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating to direct more attention there and try to improve something).

CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.
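The claim that guessing-plus-criticism is literally evolution can be caricatured in a few lines, in the spirit of Dawkins's "weasel" program. One caveat: real criticism is not comparison against a known answer, so this toy only models the variation-and-selection loop, not criticism itself.

```python
import random

random.seed(42)

TARGET = "KNOWLEDGE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def criticize(guess):
    """Criticism modeled as an error count: how many characters of
    the guess are refuted by comparison with the target."""
    return sum(a != b for a, b in zip(guess, TARGET))

def mutate(guess):
    """A new guess: vary one character at random."""
    i = random.randrange(len(guess))
    return guess[:i] + random.choice(ALPHABET) + guess[i + 1:]

# Start from an arbitrary guess with no special status or authority.
guess = "".join(random.choice(ALPHABET) for _ in TARGET)

while criticize(guess) > 0:
    candidate = mutate(guess)
    # Keep a variant only if it survives criticism at least as well.
    if criticize(candidate) <= criticize(guess):
        guess = candidate

print(guess)  # ends at "KNOWLEDGE": the only guess with zero criticisms
```

Knowledge accumulates here without any guess ever being "justified": bad variants are simply eliminated, and whatever survives criticism is kept tentatively.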

Science and Evidence

CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.

Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
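The "infinitely many compatible hypotheses" point has a standard concrete illustration. The data below is hypothetical, chosen only so that both formulas fit it exactly:

```python
# Hypothetical data: three observations that both hypotheses fit exactly.
data = [(0, 1), (1, 3), (2, 7)]

def f(x):
    # Hypothesis A: y = x**2 + x + 1
    return x**2 + x + 1

def g(x):
    # Hypothesis B: agrees with A on every data point, because the
    # added term vanishes at exactly x = 0, 1, 2 ...
    return f(x) + 5 * x * (x - 1) * (x - 2)

# Both hypotheses are equally compatible with all the evidence:
assert all(f(x) == y and g(x) == y for x, y in data)
# ... yet they contradict each other everywhere else:
assert f(3) == 13 and g(3) == 43
```

Since the multiplier 5 could be replaced by any other number, there really are infinitely many mutually contradictory hypotheses compatible with the same data, so compatible evidence cannot single one of them out.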

These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)

CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.

We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.

Comment author: Viliam 02 December 2017 01:50:10AM *  0 points [-]

Seems like these "critical arguments" do a lot of heavy lifting.

Suppose you make a critical argument against my hypothesis, and the argument feels smart to you, but silly to me. I make a counter-argument, which to me feels like it completely demolished your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of arguments are similarly fruitless.

Now what?

In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which is which? The situation seems symmetrical; both sides are yelling at each other, no progress on either side.

Would you decide by which argument seems more plausible to you? Then you are just another person in a three-person ring, and the current balance of powers happens to be 2:1. Is this about having a majority?

Or would you decide that "there is no answer" is the right answer? In that case, as long as there remains a single crackpot on this planet, we have a scientific controversy. (You can't even say that the crackpot is probably wrong, because that would be probabilistic reasoning.)

You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it.

Seems to me you kinda admit that knowledge is ultimately uncertain (i.e. probabilistic), but you refuse to talk about probabilities. (Related LW concept: "Fallacy of gray".) We are fallible, but it is wrong to guess how much. We resolve experimentally uncertain hypotheses by verbal fights, which we pretend have exactly one of three outcomes: "side A lost", "side B lost", "neither side lost"; nothing in between, such as "side A seems 3x more convincing than side B". I mean, if you start making too many points on a line, it would start to resemble a continuum, and your argument seems to be that there is no quantitative certainty, only qualitative; that only 0, 1, and 0.5 (or perhaps NaN) are valid probabilities of a hypothesis.

Okay, I feel like I am already repeating myself.

Comment author: curi 02 December 2017 05:38:37AM 0 points [-]

Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn't participating and doesn't matter. If we disagree about the nature of what's taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a "probably".

Fallibility isn't an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com

Comment author: Viliam 02 December 2017 03:18:50PM *  2 points [-]

I wish to conclude this debate somehow, so I will provide something like a summary:

If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or "critical rationalism", and (2) weighing evidence can be replaced by... uhm... collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).

I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.

First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don't, then you can't predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you can't even say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against "last Thursdayism".

Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the scale of probability from 0% to 100% into a small number of predefined baskets. There is a basket called "falsified", a basket called "not falsified, but refuted by a convincing critical argument", a basket called "open debate; there are unanswered critical arguments for both sides", and a basket called "not falsified, and supported by a convincing critical argument". (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are "zero", "one", "a few", and "many". I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude "one in ten" or "one in a million". But your vocabulary does not allow you to make this distinction; there is only the unspecific "no conclusion" and the unspecific "I am not saying it's literally 100% sure, but generally yes"; and at some point of the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.

On your website, you have a strawman powerpoint presentation about how people measure "goodness of an idea" by adding or removing goodness points, on a scale 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale; neither on Less Wrong, nor anywhere else. Specifically, Bayes Theorem is not about "goodness" of an idea; it is about mathematical probability. Unlike "goodness", probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with "goodness" of the idea "this is the first barrel" or "this is the second barrel".
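The two-barrel calculation described above can be worked through explicitly. This sketch is not part of the original comment; it assumes draws with replacement and equal prior probability for each barrel, neither of which the comment specifies:

```python
# Barrel A: 90 white / 10 black. Barrel B: 10 white / 90 black.
p_white = {"A": 0.90, "B": 0.10}
prior = {"A": 0.5, "B": 0.5}  # a barrel is chosen at random

# Observed: 4 white balls and 1 black ball in 5 draws (with replacement).
# The binomial coefficient C(5,4) is the same for both barrels, so it
# cancels in the posterior and is omitted here.
def likelihood(barrel, whites=4, blacks=1):
    return p_white[barrel] ** whites * (1 - p_white[barrel]) ** blacks

# Bayes' theorem: P(barrel | data) = P(data | barrel) P(barrel) / P(data)
evidence = sum(prior[b] * likelihood(b) for b in prior)
posterior = {b: prior[b] * likelihood(b) / evidence for b in prior}

print(round(posterior["A"], 4))  # → 0.9986
```

The result is a calculated probability, not a "goodness" score: the observed draw is roughly 99.9% likely to have come from the first barrel, given these assumptions.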

My last observation is that your methodology of "let's keep drawing the argument tree, until we reach the conclusion" allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says "okay, that's it, I also have other things to do". Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)

And that's most likely all from my side.

Comment author: Fallibilist 02 December 2017 10:56:40PM *  1 point [-]

I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.

This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Search, for example, what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.

First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don't, then you can't predict anything about the future (because under the hypothetical new laws of physics, anything could happen).

Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.

Specifically, Bayes Theorem is not about "goodness" of an idea; it is about mathematical probability. Unlike "goodness", probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with "goodness" of the idea "this is the first barrel" or "this is the second barrel".

Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events - which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.

Why do you think that some argument which crosses your mind hasn't already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn't been fully fleshed out?

Comment author: curi 02 December 2017 11:05:12PM *  0 points [-]

they've never learned or dealt with high-quality ideas before. they don't think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.

Comment author: curi 02 December 2017 04:11:01PM *  0 points [-]

You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don't have a clear understanding of what you mean by "induction" and it's a moving target. If you actually had a well-defined, complete position on epistemology I could tell you what's logically wrong with it, but you don't. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, common sense, etc. What an unscholarly mess.

What you do have is more ability to muddy the waters than patience or interest in thinking. That's a formula for never knowing you lost a debate, and never learning much. It's understandable that you're bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc, but it's unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you've found more success when talking with other people is because you find people who already agree with you about more things before the discussion starts.

Comment author: Lumifer 01 December 2017 04:43:31PM 0 points [-]

A pharmaceutical company with a strategy "let's try random molecules and do scientific studies whether they cure X" would go out of business.

Funny you should mention this.

Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. ...Eve’s robotic system is capable of screening over 10,000 compounds per day.

source

Comment author: gjm 23 November 2017 12:40:02PM 0 points [-]

At one point in that discussion curi says the following, about me:

and then he was hostile to concepts like keeping track of what points he hadn't answered or talking about discussion methodology itself. he was also, like many people, hostile to using references.

I'd just like to say, for the record, that that is not an accurate characterization of my opinion or attitudes, and I do not believe it is an accurate characterization of my words either. What is true is that we'd been talking about various Popperish things, and then curi switched to only wanting to talk about my alleged deficiencies in rational conduct and about his "Paths Forward" methodology. I wasn't interested in discussing those (I've no general objection to talking about discussion methodology, but I didn't want to have that conversation with curi on that occasion) and he wasn't willing to discuss anything else.

I still have no idea what "hostile to using references" is meant to mean.

Comment author: Lumifer 30 November 2017 02:12:47AM 1 point [-]

I still have no idea what "hostile to using references" is meant to mean.

It means you're unwilling to go to curi's website and read all he has written on the topic when he points you there.

Comment author: gjm 30 November 2017 05:32:47PM *  0 points [-]

Maybe. Though actually I have gone to curi's website (or, rather, websites; he has several) and read his stuff, when it's been relevant to our discussions. But, y'know, I didn't accept Jesus into my life^W^W^W^W the Paths Forward approach, and therefore there's no point trying to engage with me on anything else.

[EDITED to add:] Am I being snarky? Why, yes, I am being snarky. Because I spent hours attempting to have a productive discussion with this guy, and it turned out that he wasn't prepared to do that unless he got to set every detail of the terms of discussion. And also because he took all the discussions he'd had on the LW slack and published them online without anyone's consent (in fact, he asked at least one person "is it OK to post this somewhere else?" and got a negative answer and still did it). For the avoidance of doubt, so far as I know there's nothing particularly incriminating or embarrassing in any of the stuff he posted, but of course the point is that he doesn't get to choose what someone else might be unwilling to have posted in a public place.

Comment author: Lumifer 30 November 2017 06:59:36PM *  0 points [-]

Though actually I have gone to curi's website (or, rather, websites; he has several) and read his stuff

So have I, but curi's understanding of "using references" is a bit more particular than that. Unrolled, it means "your argument has been dealt with by my tens of thousands of words over there [waves hand in the general direction of the website], so we can consider it refuted and now will you please stop struggling and do as I tell you".

Why, yes, I am being snarky.

Embrace your snark and it will set you free! :-D