Reading group for Yudkowsky's "Rationality: AI to Zombies", which is basically an organized and updated version of the Sequences from LW (see http://wiki.lesswrong.com/wiki/Sequences).
The group meets to discuss the topics in the book, how to apply and benefit from them, and related topics in areas like cognitive biases, applied rationality, and effective altruism. You can get a copy of the book here: https://intelligence.org/rationality-ai-zombies/
The reading list for this week is five topics (essays 76-80) from the "Against Rationalization" section of Book II, "How To Actually Change Your Mind":
- Fake Justification
- Is That Your True Rejection?
- Entangled Truths, Contagious Lies
- Of Lies and Black Swan Blowups
- Dark Side Epistemology
We previously covered the "Map and Territory" sequence (and the earlier parts of "How To Actually Change Your Mind"), but please don't feel you need to have read everything up to this point to participate in the group.
Event is also on Facebook: https://www.facebook.com/events/962791670440258/
We're meeting on the 5th floor. If the door to the room is locked, knock; if nobody answers, look around for us elsewhere on the fifth floor. If the main doors to the building are locked, try the other entrances and see if you can tailgate in. If all the doors turn out to be locked, we'll try to have somebody downstairs to let people in.
There are usually snacks at the meetup, but feel free to bring something. We usually get dinner afterward, around 9 PM or so.
While I fully agree with the principle of the article, something about your comment stuck out to me:
What I noticed was that you were basically defining a universal prior for beliefs, one under which any given belief is much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it for a while, I came up with this basic counterargument:
You say that true beliefs are vastly outnumbered by false beliefs, but I ask: how could you know of the existence of all these false beliefs unless each one had a converse, a true belief opposing it that you first had some evidence for? Otherwise, you wouldn't know whether it was true or false.
You may then say that most true beliefs don't have just a converse; they also have many related false beliefs opposing them. But I would say those are merely the converses that spring from the connections of that true belief with its many related true beliefs.
By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one that considers most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)
I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte.
If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here.
If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then "true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" is something that I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible "statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" being true.
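To make that asymmetry concrete, here's a rough numerical sketch (the 1024-hypothesis setup is just an illustrative number I picked, not anything from your comment):

```python
from math import log2

# Toy model: N mutually exclusive, exhaustive hypotheses, exactly one of which is true.
N = 1024

# Positive beliefs: "the answer is hypothesis i". Only one of these is true.
p_positive_true = 1 / N                    # prior that a randomly chosen positive belief is true
bits_from_positive = log2(N)               # information gained by confirming the true one

# Negative beliefs: "the answer is NOT hypothesis i". All but one of these are true.
p_negative_true = (N - 1) / N              # prior that a randomly chosen negation is true
bits_from_negative = log2(N / (N - 1))     # information gained by confirming one negation

print(f"P(random positive belief is true) = {p_positive_true:.4f}")          # ~0.001
print(f"P(random negation is true)        = {p_negative_true:.4f}")          # ~0.999
print(f"bits from confirming a positive belief: {bits_from_positive:.3f}")   # 10.000
print(f"bits from confirming a negation:        {bits_from_negative:.5f}")   # ~0.00141
```

The negations are almost all true, but each one rules out almost nothing, which is what I mean by a negative belief not providing enough information to anticipate an experience.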
As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence - enough to rule out half the options - to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I've narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it's only half of the necessary evidence. I'd need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
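(And in case anyone wants to check that arithmetic, here's the same calculation spelled out; the numbers are just the illustrative ones above.)

```python
from math import log2

# Start with 1024 equally likely beliefs; pinning down one requires 10 bits.
total = 1024
bits_needed = log2(total)                          # 10.0

# The hypothetical experiment says belief 216 +/- 16, leaving 33 candidates.
remaining = 33
bits_gained = log2(total / remaining)              # ~4.956, just under 5 bits
bits_at_32 = log2(total / (remaining - 1))         # exactly 5 bits once one more is ruled out

eliminated_fraction = (total - remaining) / total  # ~96.8% of options eliminated

print(f"bits needed to pin down one belief: {bits_needed:.1f}")
print(f"bits gained going from 1024 to 33:  {bits_gained:.3f}")
print(f"bits gained going from 1024 to 32:  {bits_at_32:.1f}")
print(f"fraction of options eliminated:     {eliminated_fraction:.3%}")
```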