Meetup : Toronto - What's all this about Bayesian Probability and stuff?!

4 Giles 15 February 2013 05:28PM

Discussion article for the meetup : Toronto - What's all this about Bayesian Probability and stuff?!

WHEN: 19 February 2013 08:00:00PM (-0500)

WHERE: 54 Dundas St E, Toronto, ON

Place: Upstairs at The Imperial Public Library 54 Dundas St. E, near Dundas Station. Enter at the door on the right marked "library", go upstairs and look for the paperclip sign.

A lot of us enjoy reading the Less Wrong Sequences:

http://wiki.lesswrong.com/wiki/Sequences

...but there's a lot there that might seem confusing. Do we really try to live our lives according to a mathematical formula? Do we really hate frequentist statistics? Who are these mysterious organizations with their logos at the top of the page? What's the deal with the Harry Potter and My Little Pony stuff? How much of that math and science and philosophy is relevant to our daily lives?

This meetup is a chance to try and clear up some of these confusions.

It's also a chance for each of us to explain: what does rationality mean to you? What do you imagine a rational person is like?


Meetup : Toronto - Rational Debate: Will Rationality Make You Rich? ... and other topics

6 Giles 11 February 2013 01:12AM

Discussion article for the meetup : Rational Debate: Will Rationality Make You Rich? ... and other topics

WHEN: 12 February 2013 08:00:00PM (-0500)

WHERE: 54 Dundas St E, Toronto, ON

Place: Upstairs at The Imperial Public Library 54 Dundas St. E, near Dundas Station. Enter at the door on the right marked "library", go upstairs and look for the paperclip sign.

We'll kick the meeting off with ASK LESS WRONG. Think of something in your everyday life that's bothering you and we'll help you smooth it out. Purpose: increase the fun in each other's lives through the magic of friendship. Secondary purpose: train ourselves to notice things that are suboptimal and view them as problems that can be solved.

The main part of the meeting will be a RATIONAL DEBATE. We'll start with "will rationality make you rich", then move on to "is there intelligent life elsewhere in the universe" and "should you vote". That's probably all we'll have time for before the beer kicks in, but we do have backup topics.

If you want to read up on any of these topics, that's great - but not strictly necessary.

Rational debating is far from a solved problem, so we'll be learning how to do it as we go along. I'll be chairing, so don't worry about keeping track of this vast list of meta stuff - that's my job. It'll go something like this:

  • In a conventional debate, you win by sounding more plausible than the other person. In a rational debate, you win if and only if you end up believing the truth. This makes it a cooperative game - it's possible for everyone to win or for everyone to lose. (Incidentally it also means you don't actually know whether you've won or not).

  • Initially, each person answers the question separately, choosing how they wish to frame their answer. If people come up with very different ways of framing the question, we will take each one in turn and try to approach the question from that direction. (The point of this is to avoid fighting over the framing of the discussion and instead address the issues directly).

  • I'll keep track of structural stuff - different ways of framing the question, agreed subtopics of discussion, and binary chopping to find points of disagreement (which involves listing statements and saying aloud how plausible we each think they are).

  • When arguing against something, construct a steel man first - rephrase the opposing argument in your own words, making it as strong and plausible as you can, before you try and defeat it.

  • Be bold and specific - make sure you're saying something substantial, even if you're not completely sure it's true.

  • The social aspect: make sure we're providing status and rewards for the right things.

  • Leave a line of retreat. What would I do if I was wrong about this?

  • Try to notice when you're replying to somebody's cached thought with a cached thought of your own. I'll try and do the same.

  • Try to find something to change your mind about, even if it's something small.

  • Separate out disagreement about facts from disagreement about values (and disagreement about strategy, which combines both). Separate out semantic confusion. I think we're already reasonably good at these.

  • If possible, identify which of these techniques you're trying to put into practice. I'll do the same. (By drawing attention to this we'll help keep things purposeful, and also hopefully learn which techniques seem particularly useful).
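The "binary chopping" step above can be sketched in code. The idea, as I understand it, is: order statements in a chain from shared premises toward the disputed conclusion, then bisect to find the earliest statement where two people's plausibility ratings diverge. Everything below (the function name, the threshold, the example ratings) is invented for illustration, not part of any actual meetup procedure.

```python
def first_disagreement(ratings_a, ratings_b, threshold=0.3):
    """Return the index of the earliest statement where the two parties'
    plausibility ratings differ by more than `threshold`.

    Assumes: the parties agree on statement 0, disagree on the last
    statement, and once their ratings diverge they stay diverged
    (i.e. disagreement is monotone along the chain).
    """
    lo, hi = 0, len(ratings_a) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if abs(ratings_a[mid] - ratings_b[mid]) > threshold:
            hi = mid  # disagreement already present at mid
        else:
            lo = mid  # still agreeing at mid
    return hi  # earliest index where the ratings diverge


# Example: two people rate a five-statement chain; they part ways at
# statement 3, which is where the interesting discussion is.
alice = [0.95, 0.90, 0.80, 0.70, 0.60]
bob = [0.90, 0.85, 0.80, 0.30, 0.10]
crux = first_disagreement(alice, bob)  # index 3
```

The bisection only saves time over a linear scan when the chain is long, but the real point is the discipline: it forces the debaters to write the chain of statements down at all.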

Resources on rational debate:

http://lesswrong.com/lw/85h/better_disagreement/

http://lesswrong.com/lw/o4/leave_a_line_of_retreat/

http://lesswrong.com/lw/gm9/philosophical_landmines/

Hope to see you all at the Imperial Pub on Tuesday! Let me know if you can come.


Comment author: Giles 09 February 2013 06:17:21AM 34 points

I love the landmine metaphor - it blows up in your face and it's left over from some ancient war.

Comment author: Qiaochu_Yuan 01 February 2013 06:08:33PM 34 points

Things that are your fault are good because they can be fixed. If they're someone else's fault, you have to fix them, and that's much harder.

-- Geoff Anders (paraphrased)

Comment author: Giles 09 February 2013 04:34:03AM 4 points

Did he mean if they're someone else's fault then you have to fix the person?

Comment author: Qiaochu_Yuan 07 February 2013 06:02:50PM 5 points

Do people update far more strongly on evidence if it comes from their own lab?

This isn't a completely unreasonable thing to do. For one thing, you have much more knowledge about the methodology of experiments conducted in your lab.

Comment author: Giles 07 February 2013 06:03:51PM 5 points

You also know your own results aren't fraudulent.

Comment author: Giles 07 February 2013 05:58:06PM 3 points

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.

He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?

Also, yay priming! (I don't want this comment to sound negative about priming as such)

Comment author: Giles 06 February 2013 07:19:38PM 1 point

2 sounds wrong to me - like you're trying to explain why having a consistent internal belief structure is important to someone who already believes that.

The things which would occur to me are:

  • If both of you are having reactions like this then you're dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues - if you can both get curious - then that's great. But I don't know how to make that happen.
  • Does your friend actually have any contradictory beliefs? Do they believe that they do?
  • You could escalate - point out every time your friend applies a math thing to social justice. "2000 people? That's counting. You're applying a math thing there." "You think this is better than that? That's called a partial ordering and it's a math thing". I'm not sure I'd recommend this approach though.
Comment author: shminux 06 February 2013 05:15:13PM 10 points

I suspect that what frustrated you is not noticing your own confusion. You clearly had a case of lost purposes: "applying a math thing to social justice" is instrumental, not terminal. You discovered a belief "applying math is always a good thing" which is not obviously connected to your terminal goal "social justice is a good thing".

You are rationalizing your belief about applying math in your point 2:

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position. Your point 3 confirms it:

No matter how offended you are about something, thinking about it will still resolve the issue.

In other words, you resolved your cognitive dissonance by believing the argument you invented, without any updating.

If you feel like thinking about the issue some more, consider connecting your floating belief "math is good" to something grounded, like The Useful Idea of Truth:

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is reasonably uncontroversial, so the next step would be to ponder whether in order to be better at this social justice thing one has to be better at modeling reality. If so, you can proceed to the argument that a consistent model is better than an inconsistent one at this task. This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it? What if s/he comes up with examples where someone following an inconsistent model (like, say, Mother Teresa) contributes more to social justice than those who study the issue for a living? Would you accept their evidence as a falsification of your meta-model "logical consistency is essential"? If not, why not?
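The quoted claim from The Useful Idea of Truth - increase credence in hypotheses that make correct predictions and the model becomes incrementally more true - is just Bayes' rule applied repeatedly. A minimal sketch, with the prior and likelihoods invented purely for illustration:

```python
def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Posterior probability of hypothesis H after observing some data."""
    p_data = prior * p_data_given_h + (1 - prior) * p_data_given_not_h
    return prior * p_data_given_h / p_data


# A hypothesis that keeps predicting the observed data gains credence.
# Assumed numbers: H predicts the data with probability 0.9, the
# alternative with probability 0.5; start from an even prior.
credence = 0.5
for _ in range(3):  # three correct predictions in a row
    credence = bayes_update(credence, 0.9, 0.5)
# credence has climbed from 0.5 to roughly 0.85
```

Note the update only moves credence to the extent that H predicts the data better than the alternative does; evidence both hypotheses predict equally well changes nothing, which is one way of cashing out "correct experimental predictions" in the quote.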

Comment author: Giles 06 February 2013 06:50:52PM 1 point

This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it?

Remember, you have to make a convincing case without using stuff like logic.

Comment author: Valentine 05 February 2013 08:10:18AM 1 point

Does LW have an anti-publication-bias registry somewhere?

Not that I know of, but that does sound quite awesome.

...I'll be attending the March workshop.

I look forward to meeting you, Giles!

Comment author: Giles 05 February 2013 02:34:52PM 1 point

Not that I know of

Any advice on how to set one up? In particular, how to add entries to it retrospectively - I was thinking about searching the comments database for things like "I intend to", "guard against", "publication bias" etc. and manually finding the relevant ones. This is somewhat laborious, but the effect I want to avoid is "oh I've just finished my write-up (or am just about to), now I'll go and add the original comment to the anti-publication bias registry".

On the other hand it seems like anyone can safely add anyone else's comment to the registry as long as it's close enough in time to when the comment was written.

Any advice? (I figured if you're involved at CFAR you might know a bit about this stuff).

Comment author: Giles 04 February 2013 10:44:00PM 8 points

This is interesting. People who are vulnerable to the donor illusion either have some of their money turned into utilons, or are taught a valuable lesson about the donor illusion, possibly creating more utilons in the long term.
