
[Link] Open Letter to MIRI + Tons of Interesting Discussion

0 curi 22 November 2017 09:16PM

Less Wrong Lacks Representatives and Paths Forward

1 curi 08 November 2017 07:00PM

In my understanding, there’s no one who speaks for LW, as its representative, and is *responsible* for addressing questions and criticisms. LW, as a school of thought, has no agents, no representatives – or at least none who are open to discussion.

 

The people I’ve found interested in discussion on the website and Slack have diverse views which disagree with LW on various points. None claim LW is true. They all admit it has some weaknesses, some unanswered criticisms. They have their own personal views which aren’t written down, and which they don’t claim to be correct anyway.

 

This is problematic. Suppose I wrote some criticisms of the Sequences, or of some Bayesian book. Who will answer me? Who will fix the mistakes I point out, or canonically address my criticisms with counter-arguments? No one. This makes it hard to learn LW’s ideas, in addition to making it hard to improve them.

 

My school of thought (Fallible Ideas – FI – https://fallibleideas.com) has representatives and claims to be correct as far as is known (like LW, it’s fallibilist, so of course we may discover flaws and improve it in the future). It claims to be the best current knowledge, which is currently non-refuted, and has refutations of its rivals. There are other schools of thought which say the same thing – they actually think they’re right and have people who will address challenges. But LW just has individuals who individually chat about whatever interests them without there being any organized school of thought to engage with. No one is responsible for defining an LW school of thought and dealing with intellectual challenges.

 

So how is progress to be made? Suppose LW, vaguely defined as it may be, is mistaken on some major points. E.g. Karl Popper refuted induction. How will LW find out about its mistake and change? FI has a forum where its representatives take responsibility for seeing challenges addressed, and have done so continuously for over 20 years (as some representatives stopped being available, others stepped up).

 

Which challenges are addressed? *All of them*. You can’t just ignore a challenge because it could be correct. If you misjudge something and then ignore it, you will stay wrong. Silence doesn’t facilitate error correction. For information on this methodology, which I call Paths Forward, see: https://curi.us/1898-paths-forward-short-summary BTW if you want to take this challenge seriously, you’ll need to click the link; I don’t repeat all of it here. In general, having much knowledge is incompatible with restating all of it (even on one topic) upfront in forum posts without using references.

 

My criticism of LW as a whole is that it lacks Paths Forward (and lacks some alternative of its own to fulfill the same purpose). In that context, my criticisms regarding specific points don’t really matter (or aren’t yet ready to be discussed) because there’s no mechanism for them to be rationally resolved.

 

One thing FI has done, which is part of Paths Forward, is it has surveyed and addressed other schools of thought. LW hasn’t done this comparably – LW has no answer to Critical Rationalism (CR). People who chat at LW have individually made some non-canonical arguments on the matter that LW doesn’t take responsibility for (and which often involve conceding LW is wrong on some points). And they have told me that CR has critics – true. But which criticism(s) of CR does LW claim are correct and take responsibility for the correctness of? (Taking responsibility for something involves doing some major rethinking if it’s refuted – addressing criticism of it and fixing your beliefs if you can’t. Which criticisms of CR would LW be shocked to discover are mistaken, and then be eager to reevaluate the whole matter?) There is no answer to this, and there’s no way for it to be answered because LW has no representatives who can speak for it and who are participating in discussion and who consider it their responsibility to see that issues like this are addressed. CR is well known, relevant, and makes some clear LW-contradicting claims like that induction doesn’t work, so if LW had representatives surveying and responding to rival ideas, they would have addressed CR.

 

BTW I’m not asking for all this stuff to be perfectly organized. I’m just asking for it to exist at all so that progress can be made.

 

Anecdotally, I’ve found substantial opposition to discussing/considering methodology from LW people so far. I think that’s a mistake because we use methods when discussing or doing other activities. I’ve also found substantial resistance to the use of references (including to my own material) – but why should I rewrite a new version of something that’s already written? Text is text and should be treated the same whether it was written in the past or today, and whether it was written by someone else or by me (either way, I’m taking responsibility). I think that’s something people don’t understand: they’re used to people throwing references around both vaguely and irresponsibly – but they haven’t pointed out any instance where I made that mistake. Ideas should be judged by the idea, not by attributes of the source (reference or non-reference).

 

The Paths Forward methodology is also what I think individuals should personally do – it works the same for a school of thought or an individual. Figure out what you think is true *and take responsibility for it*. For parts that are already written down, endorse that and take responsibility for it. If you use something to speak for you, then if it’s mistaken *you* are mistaken – you need to treat that the same as your own writing being refuted. For stuff that isn’t written down adequately by anyone (in your opinion), it’s your responsibility to write it (either from scratch or using existing material plus your commentary/improvements). This writing needs to be put in public and exposed to criticism, and the criticism needs to actually get addressed (not silently ignored) so there are good Paths Forward. I hoped to find a person using this method, or interested in it, at LW; so far I haven’t. Nor have I found someone who suggested a superior method (or even *any* alternative method to address the same issues) or pointed out a reason Paths Forward doesn’t work.

 

Some people I talked with at LW seem to still be developing as intellectuals. For lots of issues, they just haven’t thought about it yet. That’s totally understandable. However I was hoping to find some developed thought which could point out any mistakes in FI or change its mind. I’m seeking primarily peer discussion. (If anyone wants to learn from me, btw, they are welcome to come to my forum. It can also be used to criticize FI. http://fallibleideas.com/discussion-info) Some people also indicated they thought it’d be too much effort to learn about and address rival ideas like CR. But if no one has done that (so there’s no answer to CR they can endorse), then how do they know CR is mistaken? If CR is correct, it’s worth the effort to study! If CR is incorrect, someone had better write that down in public (so CR people can learn about their errors and reform, and so perhaps they could improve CR to no longer be mistaken or point out errors in the criticism of CR).

 

One of the issues related to this dispute is I believe we can always proceed with non-refuted ideas (there is a long answer for how this works, but I don’t know how to give a short answer that I expect LW people to understand – especially in the context of the currently-unresolved methodology dispute about Paths Forward). In contrast, LW people typically seem to accept mistakes as just something to put up with, rather than something to try to always fix. So I disagree with ignoring some *known* mistakes, whereas LW people seem to take it for granted that they’re mistaken in known ways. Part of the point of Paths Forward is not to be mistaken in known ways.

 

Paths Forward is a methodology for organizing schools of thought, ideas, discussion, etc, to allow for unbounded error correction (as opposed to typical things people do like putting bounds on discussions, with discussion of the bounds themselves being out of bounds). I believe the lack of Paths Forward at LW is preventing the resolution of other issues like about the correctness of induction, the right approach to AGI, and the solution to the fundamental problem of epistemology (how new knowledge can be created).

[Link] AGI

0 curi 05 November 2017 08:20PM

[Link] Intent of Experimenters; Halting Procedures; Frequentists vs. Bayesians

1 curi 04 November 2017 07:13PM

[Link] Simple refutation of the ‘Bayesian’ philosophy of science

1 curi 01 November 2017 06:54AM

Questions about AGI's Importance

0 curi 31 October 2017 08:50PM

Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?

[Link] Reason and Morality: Philosophy Outline with Links for Details

0 curi 30 October 2017 11:33PM

David Deutsch on How To Think About The Future

4 curi 11 April 2011 07:08AM

http://vimeo.com/22099396

What do people think of this, from a Bayesian perspective?

It is a talk given to the Oxford Transhumanists. Their previous speaker was Eliezer Yudkowsky. Audio version and past talks here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

The Conjunction Fallacy Does Not Exist

-38 curi 10 April 2011 10:35PM

The conjunction fallacy says that people attribute higher probability to X&Y than to Y alone.
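
For reference, the mathematics itself isn't in dispute here: by the product rule, a conjunction can never be more probable than either of its conjuncts on its own.

```latex
P(X \wedge Y) \;=\; P(Y)\,P(X \mid Y) \;\le\; P(Y),
\qquad \text{since } 0 \le P(X \mid Y) \le 1.
```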

The claim about people, however, is false and misleading. It is based on bad pseudo-scientific research designed to prove that people are biased idiots. One of the intended implications, which the research does nothing to address, is that this is caused by genetics and isn't something people can change except by being aware of the bias and compensating for it when it arises.

In order to achieve these results, the researchers choose X, Y, and the question they ask in a special way. Here's what they don't ask:

What's more likely this week, both a cure for cancer and a flood, or a flood?

Instead they do it like this:

http://en.wikipedia.org/wiki/Conjunction_fallacy

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

Or like this:

http://lesswrong.com/lw/ji/conjunction_fallacy/

"A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."

"A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."

These use different tricks. But both are constructed in a way that biases the results.

By the way, this is a case of the general phenomenon that bad research often gets more impressive results, which is explained in _The Beginning of Infinity_ by David Deutsch. If they weren't bad researchers and didn't bias their research, they would have gotten a negative result and not had anything impressive to publish.

The trick with the first one is that the second answer is more evidence-based than the first. The first answer choice has nothing to do with the provided context. The second answer choice does connect to the provided context: it is partially evidence-based. Instead of taking the question literally, as a question about the mathematics of probability, people decide which answer makes more sense and say that. The first one makes no sense (having nothing to do with the provided information). The second one partially makes sense, so they say it's better.

A more literally minded person would catch on to the trick. But so what? Why should people learn to split hairs so that they can give literally correct answers to bad and pointless questions? That's not a useful skill so most people don't learn it.

The trick with the second one is that the second answer is a better explanation. The first part provides a reason for the second part to happen. Claims that have explanatory reasons are better than claims that don't. People are helpfully reading "and" as expressing a relationship -- just as they would do if their friend asked them about the possibility of Russia invading Poland and the US suspending diplomacy. They think the two parts are relevant, and make sense together. With the first one, they don't see any good explanation offered so they reject the idea. Did it happen for no reason? Bad claim. Did it happen without an invasion of Poland or any other notable event worth mentioning? Bad claim.

People are using valuable real life skills, such as looking for good explanations and trying to figure out what reasonable question people intend to ask, rather than splitting hairs. This is not a horrible bias about X&Y being more likely than Y. It's just common sense. All the conjunction fallacy research shows is that you can miscommunicate with people and then blame them for the misunderstanding you caused. If you speak in a way such that you can reasonably expect to be misunderstood, you can then say people are wrong for not giving correct answers to what you meant and failed to communicate to them.

The conjunction fallacy does not exist, as it claims to, for all X and all Y. That it does exist for specially chosen X, Y, and context does not support the stated conclusion that it exists for all X and Y. The research is wrong and biased. It should become less wrong by recanting.

This insight was created by philosophical thinking of the type explained in _The Beginning of Infinity_ by David Deutsch. It was not created by empirical research, prediction, or Bayesian epistemology. It's one of many examples of how good philosophy leads to better results and helps us spot mistakes instead of making them. As Deutsch explained, bad explanations can be rejected without testing, and testing them is pointless anyway (because they can just make ad hoc retreats to other bad explanations to avoid refutation by the data; only good explanations can't do that).

Please correct me if I'm wrong. Show me an unbiased study on this topic and I'll concede.

Do people think in a Bayesian or Popperian way?

-22 curi 10 April 2011 10:18AM
People think A&B is more likely than A alone, if you ask the right question. That's not very Bayesian; as far as you Bayesians can tell it's really quite stupid.
Is that maybe evidence that Bayesianism is failing to model how people actually think?
Popperian philosophy can make sense of this (without hating on everyone! it's not good to hate on people when there are better options available). It explains it like this: people like explanations. When you say "A happened because B happened" it sounds to them like a pretty good explanatory theory which makes sense. When you say "A alone" they don't see any explanation and they read it as "A happened for no apparent reason" which is a bad explanation, so they score it worse.
To concretize this, you could use A = economic collapse and B = nuclear war.
People are looking for good explanations. They are thinking in a Popperian fashion.
Isn't it weird how you guys talk about all these biases which basically consist of people not thinking in the way you think they should, but when someone says "hey, actually they think in this way Popper worked out" you think that's crazy because the Bayesian model must be correct? Why did you find all these counterexamples to your own theory and then never notice they mean your theory is wrong? In the cases where people don't think in a Popperian way, Popper explains why (mostly because of the justificationist tradition informing many mistakes since Aristotle).
Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Changing the number does not change most of the explanations involved, such as why helping birds is good, what the person can afford to spare, how much charity it takes for the person to feel altruistic enough (or moral enough, involved enough, helpful enough, whatever), etc. Since the major explanatory factors they were considering don't change in proportion to the number of birds, their answer doesn't change proportionally either.
Correspondence Bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance.
This happens because people usually know the explanations/excuses for why they did stuff, but they don't know them for others. And they have more reason to think of them for themselves.
Confirmation bias, or Positive Bias, is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.
People do this because of the justificationist tradition, dating back to Aristotle, which Bayesian epistemology is part of, and which Popper rejected. This is a way people really don't think in the Popperian way -- but they could and should.
Planning Fallacy - We tend to plan envisioning that everything will go as expected. Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst case scenario.
This is also caused by the justificationist tradition, which Bayesian epistemology is part of. It's not fallibilist enough. This is a way people really don't think in the Popperian way -- but they could and should.
Well, that's part of the issue. The other part is they come up with a good explanation of what will happen, and they go with that. That part of their thinking fits what Popper said people do. The problem is not enough criticism, which is from the popularity of justificationism.
Do We Believe Everything We're Told? - Some experiments on priming suggest that mere exposure to a view is enough to get one to passively accept it, at least until it is specifically rejected.
That's very Popperian. The Popperian way is that you can make conjectures however you want, and you only reject them if there's a criticism. No criticism, no rejection. This contrasts with the justificationist approach in which ideas are required to (impossibly) have positive support, and the focus is on positive support not criticism (thus causing, e.g., Confirmation Bias).
Illusion of Transparency - Everyone knows what their own words mean, but experiments have confirmed that we systematically overestimate how much sense we are making to others.
This one is off topic but there are several things I wanted to say. First, people don't always know what their own words mean. People talking about tricky concepts like God, qualia, or consciousness often can't explain what they mean by the words if asked. Sometimes people even use words without knowing the definition; they just heard it in a similar circumstance another time or something.
The reason others don't understand us, often, is the nature of communication. For communication to succeed, the other person has to create knowledge of what idea(s) you are trying to express to him. That means he has to make guesses about what you are saying and use criticisms to improve those guesses (e.g. by ruling out stuff incompatible with the words he heard you use). In this way Popperian epistemology lets us understand communication, and why it's so hard.
Evaluability - It's difficult for humans to evaluate an option except in comparison to other options. Poor decisions result when a poor category for comparison is used. Includes an application for cheap gift-shopping.
It's because they are trying to come up with a good explanation of what to buy. And "this one is better than this other one" is a pretty simple and easily available kind of explanation to create.
The Allais Paradox (and subsequent followups) - Offered choices between gambles, people make decision-theoretically inconsistent decisions.
How do you know that kind of thing and still think people reason in a Bayesian way? They don't. They just guess at what to gamble, and the quality of the guesses is limited by what criticisms they use. If they don't know much math then they don't subject their guesses to much mathematical criticism. Hence this mistake.
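
As a sketch of what that inconsistency looks like, using the textbook version of the Allais gambles (the post above doesn't spell out specific payoffs, so these numbers are the standard illustration, not quoted from it):

```latex
% Typical choice pattern: 1A over 1B, and 2B over 2A.
% 1A: $1M for sure             1B: 0.10 ($5M) + 0.89 ($1M) + 0.01 ($0)
% 2A: 0.11 ($1M) + 0.89 ($0)   2B: 0.10 ($5M) + 0.90 ($0)
\begin{align*}
\text{prefer 1A:}\quad & U(1\mathrm{M}) > 0.10\,U(5\mathrm{M}) + 0.89\,U(1\mathrm{M}) + 0.01\,U(0)
  \;\Rightarrow\; 0.11\,U(1\mathrm{M}) > 0.10\,U(5\mathrm{M}) + 0.01\,U(0) \\
\text{prefer 2B:}\quad & 0.10\,U(5\mathrm{M}) + 0.90\,U(0) > 0.11\,U(1\mathrm{M}) + 0.89\,U(0)
  \;\Rightarrow\; 0.10\,U(5\mathrm{M}) + 0.01\,U(0) > 0.11\,U(1\mathrm{M})
\end{align*}
```

The two implications contradict each other, so no single assignment of utilities reproduces the common 1A-plus-2B choice pattern; that is the decision-theoretic inconsistency referred to above.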
