Meetup : Seattle Rationality Reading Group

0 CBHacking 24 February 2016 09:12PM

WHEN: 29 February 2016 06:30:00PM (-0800)

WHERE: Paul G. Allen Center, 185 Stevens Way, Seattle, Washington

Come meet other Seattle-area aspiring rationalists to discuss the week's reading, learn rationality techniques, and have a good time. This is a weekly meetup, meeting on the 5th floor of the Paul Allen Center (UW computer science building), often in room 503. Discussion will start at 6:45.

This week's Facebook event is https://www.facebook.com/events/1706167056307601/. To see future events, consider joining the Seattle Rationality group, https://www.facebook.com/groups/seattlerationality/.

While doing the reading beforehand is recommended, it is not required. We are currently working through A Human's Guide to Words (part of The Machine in the Ghost), with added content from Slate Star Codex. Previous reading included the Map and Territory and How to Actually Change Your Mind sequences.

Recommended reading:

http://lesswrong.com/lw/o1/entropy_and_short_codes/

http://lesswrong.com/lw/o2/mutual_information_and_density_in_thingspace/

http://lesswrong.com/lw/6kx/wanting_vs_liking_revisited/

http://lesswrong.com/lw/6kf/prospect_theory_a_framework_for_understanding/

Comment author: Ty-Guy9 20 March 2015 09:05:27AM 0 points

While I fully agree with the principle of the article, something stuck out to me about your comment:

In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

What I noticed was that you were basically defining a universal prior for beliefs as much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument:

You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn't know whether it was true or false.

You may then say that most true beliefs don't just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs.

By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)

Comment author: CBHacking 18 January 2016 11:31:39PM 0 points

I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte.

If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here.
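
To put rough numbers on that asymmetry - a minimal sketch in Python, with a made-up count of 1024 possible experiences (my assumption, not a figure from the thread) - a positive belief carries enough information to single out an experience, while the matching negative belief carries almost none:

    import math

    # Hypothetical numbers: 1024 mutually exclusive possible experiences.
    # A positive belief names one of them; a negative belief merely rules
    # one out.
    N = 1024

    positive_bits = math.log2(N)            # "it will be X": 10 bits
    negative_bits = math.log2(N / (N - 1))  # "it will NOT be X": ~0.0014 bits

    print(positive_bits, negative_bits)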

If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then "true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" is something that I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible "statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" being true.

As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it takes only one bit of evidence - enough to rule out half the options - to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment which finds that the true belief lies in the range 216 +/- 16, I've narrowed my options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it's only half of the necessary evidence. I'd need another experiment that eliminates just as large a percentage of the remaining options to determine the correct belief.
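
That arithmetic, spelled out as a short runnable sketch (the 1024 hypotheses and the 216 +/- 16 result are just the illustrative numbers from the paragraph above):

    import math

    def bits_gained(before, after):
        """Evidence, in bits, from shrinking a uniform hypothesis space."""
        return math.log2(before / after)

    total = 1024                                # candidate beliefs
    remaining = len(range(216 - 16, 216 + 17))  # 216 +/- 16 inclusive -> 33

    print(math.log2(total))               # 10.0 bits needed overall
    print(bits_gained(total, remaining))  # ~4.956 bits from the experiment
    print(1 - remaining / total)          # ~0.968 of the options eliminated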

Comment author: theguyfromoverthere 20 June 2014 05:26:59PM 0 points

I think this is in the context of somebody insisting that Socrates is human so he must be mortal.

If you are trying to prove mortality by claiming he's human, then all humans must be mortal for you to assume this.

I agree, though, that perhaps the statement was a little vague.

Comment author: CBHacking 04 January 2016 01:52:31AM 0 points

Replying loooong after the fact (as you did, for that matter), but I think that's exactly the problem the post is talking about. In logical terms, one can define a category "human" such that it carries an implication "mortal", but if one does that, one can't add things to this category until determining that they conform to the implication.

The problem is, the vast majority of people don't think that way. They automatically recognize "natural" categories (including, sometimes, of unnatural things that appear similar), they assign properties to the members of those categories, and then they assume things about objects purely on the basis of their appearing to belong to that category.

Suppose you encountered a divine manifestation, or an android with a fully-redundant remote copy of its "brain", or a really excellent hologram, or some other entity that presented as human but was by no conventional definition of the word "mortal". You would expect that, if shot in the head with a high-caliber rifle, it would die; that's what happens to humans. After seeing it get shot, fall over, stop breathing, and cease to have a visible pulse, you would even conclude that it is dead. You probably wouldn't ask this seeming corpse "are you dead?", nor would you attempt to scan its head for brain activity (medically defining "dead" today is a little tricky, but "no brain activity at all" seems like a reasonable bar).

All of this is reasonable; you have no reason to expect immortal beings to walk among us, or non-breathing headshot victims to be capable of speech, or anything else of that nature. These assumptions go so deep that it is hard to even say where they come from, other than "I've never heard of that outside of fiction" (which is an imperfect heuristic; I learn of things I'd never heard about every day, and I even encountered some of the concepts in fiction before learning they really exist). Nobody acknowledges that it's a heuristic, though, and that can lead to making incorrect assumptions that should be consciously avoided when there's time to consider the situation.

@Caledonian2 said "If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.", but that statement is self-contradictory unless the implication "human" -> "mortal" is not a definitional truth. If the implication does hold by definition, then mortality itself is part of "the necessary criteria for identification as human".
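
One way to make that circularity concrete - a hypothetical sketch, with attribute names of my own invention rather than anything from the thread - is to note that if mortality is baked into the membership test, the syllogism can never output anything that wasn't already checked:

    from dataclasses import dataclass

    @dataclass
    class Entity:
        looks_human: bool
        is_mortal: bool

    # If "mortal" is part of the logical definition of "human", then the
    # categorization itself already requires verifying mortality...
    def is_human(e):
        return e.looks_human and e.is_mortal

    # ...so "he is human, therefore he is mortal" never tells you anything
    # you had not already established before applying the label.
    def concluded_mortal(e):
        return is_human(e)

    hologram = Entity(looks_human=True, is_mortal=False)
    print(is_human(hologram))          # False: the strict definition withholds the label
    print(concluded_mortal(hologram))  # False: the syllogism adds no new information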

Comment author: [deleted] 02 November 2015 09:24:50PM 0 points

What specific method of torture? I'd assume that many methods are designed to push the pain level as high as possible, but there are others that involve much less pain and instead rely on other negative sensations.

In response to comment by [deleted] on Open thread, Nov. 02 - Nov. 08, 2015
Comment author: CBHacking 06 November 2015 01:31:01AM 0 points

Agreed. "Torture" as a concept doesn't describe any particular experience, so you can't put a specific pain level to it. Waterboarding puts somebody in fear for their life and evokes very well-ingrained terror triggers in our brain, but doesn't really involve pain (to the best of my knowledge). Branding somebody with a glowing metal rod would cause a large amount of pain, but I don't know how much - it probably depends on the size, location, and so on anyhow - and something very like it can be done on a small scale as a medical procedure to cauterize a wound or similar. Tearing off somebody's finger- and toenails is said to be an effective torture, and I can believe it, but it can also happen fairly painlessly in the ordinary course of events; I once lost a toenail and didn't even notice until something touched where it should have been (though I'd been exercising, which suppresses pain to a degree).

If you want to know how painful it is to, say, endure the rack, I can only say I hope nobody alive today knows. The same goes for the pain level at which an average person loses the ability to effectively defy a questioner, or anything like that...

Meetup : Seattle Rationality Reading Group: 109-114

1 CBHacking 05 November 2015 03:09AM

WHEN: 09 November 2015 06:30:00PM (-0800)

WHERE: Paul G. Allen Center, 185 Stevens Way, Seattle, Washington

This is a weekly meetup in the Seattle area, reading Yudkowsky's Rationality: From AI to Zombies. We are currently in the "Death Spirals and the Cult Attractor" sequence of "How to Actually Change Your Mind". See the Facebook event here: https://www.facebook.com/events/520808021411665/

The meetings involve discussions of the recent readings and related topics. They are typically around 2 - 2.5 hours long. Bringing snacks is welcome but by no means expected. We usually go for dinner on the Ave after the meetup. We are typically located in room 503.

It's not necessary to have read the material leading up to this week's readings (either in the book Rationality, or in the Sequences here on LW), but it may help. The items for this week are:

109. Evaporative Cooling of Group Beliefs

110. When None Dare Urge Restraint

111. The Robbers Cave Experiment

112. Every Cause Wants to Be a Cult

113. Guardians of the Truth

114. Guardians of the Gene Pool

Meetup : Rationality Reading Group (76-80)

1 CBHacking 23 August 2015 07:34AM

WHEN: 24 August 2015 06:30:00PM (-0700)

WHERE: Paul G. Allen Center (185 Stevens Way, Seattle, WA) Room 503

Reading group for Yudkowsky's "Rationality: From AI to Zombies", which is basically an organized and updated version of the Sequences from LW (see http://wiki.lesswrong.com/wiki/Sequences).

The group meets to discuss the topics in the book, how to apply and benefit from them, and related topics in areas like cognitive biases, applied rationality, and effective altruism. You can get a copy of the book here: https://intelligence.org/rationality-ai-zombies/

The reading list for this week is five topics from the "Against Rationalization" section of Book II, "How To Actually Change Your Mind". They are:

  76. Fake Justification
  77. Is That Your True Rejection?
  78. Entangled Truths, Contagious Lies
  79. Of Lies and Black Swan Blowups
  80. Dark Side Epistemology

We previously covered the "Map and Territory" sequence (and previous parts of "How To Actually Change Your Mind"), but please don't feel a need to have read everything up to this point to participate in the group.

Event is also on Facebook: https://www.facebook.com/events/962791670440258/

We're meeting on the 5th floor. If you show up and the door to the room is locked, knock, and if nobody answers, look around for us elsewhere on the floor. If the doors to the building are locked, try the other entrances and see if you can tailgate in. If everything is locked, we'll try to have somebody let people in.

There are usually snacks at the meetup, though feel free to bring something. We usually get dinner afterward, around 9PM or so.

Comment author: ike 09 August 2015 03:22:18AM 0 points

That doesn't change the situation much, if you can sell it (or get higher pay for refusing it). If you somehow can't extract value from it (doubtful unless there are laws against selling), then it's relevant.

Comment author: CBHacking 21 August 2015 09:42:43AM 0 points

I haven't investigated selling it, but coverage up to a certain multiple of my annual salary is included in my benefits, and there's no point in setting it lower than that cap; I wouldn't get any extra money.

This is a fairly standard benefit at tech companies (and others with good benefits packages in the US), apparently. It feels odd, but it's been like this at the last few companies I've worked for, differing only in the insurance provider whose policy is used and the actual limit above which you'd need to pay extra.

Comment author: AndreInfante 05 August 2015 11:30:58PM 5 points

Technically, it's the frogs and fish that routinely freeze through the winter. Of course, they evolved to pull off that stunt, so it's less impressive.

We've cryopreserved a whole mouse kidney before, and were able to thaw and use it as a mouse's sole kidney.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/

We've also shown that nematode memory can survive cryopreservation:

http://www.dailymail.co.uk/sciencetech/article-3107805/Could-brains-stay-forever-young-Memories-survive-cryogenic-preservation-study-shows.html

The trouble is that larger chunks of tissue (like, say, a whole mouse or a human brain) are more prone to thermal cracking at very low temperatures. Until we solve that problem, nobody's coming back short of brain emulation or nanotechnology.

Comment author: CBHacking 08 August 2015 09:38:46AM 1 point

Nitpick: the article talks about a rabbit kidney, not a mouse one.

It also isn't entirely clear how cold the kidney got, or how long it was stored. It's evidence in favor of "at death" cryonics, but I'm not sure how strong that evidence is. Also, it's possible to survive with substantially more kidney damage than you would even want to incur as brain damage.

Comment author: ike 04 August 2015 01:21:50PM 0 points

And you won't have any other use for that money when you're dead.

That's assuming you'd otherwise die with enough money to pay for it, and neglecting fees that need to be paid while alive. "Life insurance" doesn't solve this, you still need to pay for it.

Comment author: CBHacking 08 August 2015 09:26:29AM 1 point

Many employers provide life insurance. I've always thought that was kind of weird (but then, all of life insurance is weird; it's more properly "death insurance" anyhow), but it's a thing. My current employer provides (at no cost to me) a life insurance policy sufficient to pay for cryonics. The payout would currently go to charity - I have no dependents and my family is reasonably well off - but I've considered changing that.

Meetup : Rationality Reading Group (71-75)

1 CBHacking 08 August 2015 08:45AM

WHEN: 10 August 2015 06:30:00PM (-0700)

WHERE: Paul G. Allen Center (185 Stevens Way, Seattle, WA) Room 503

Reading group for Yudkowsky's "Rationality: From AI to Zombies", which is basically an organized and updated version of the Sequences from LW (see http://wiki.lesswrong.com/wiki/Sequences).

The group meets to discuss the topics in the book, how to apply and benefit from them, and related topics in areas like cognitive biases, applied rationality, and effective altruism. You can get a copy of the book here: https://intelligence.org/rationality-ai-zombies/

The reading list for this week is five topics from the "Against Rationalization" section of Book II, "How To Actually Change Your Mind". They are:

  71. What Evidence Filtered Evidence?

  72. Rationalization

  73. A Rational Argument

  74. Avoiding Your Belief’s Real Weak Points

  75. Motivated Stopping and Motivated Continuation

We previously covered the "Map and Territory" sequence a few months ago, but please don't feel a need to have read everything up to this point to participate in the group.

Event is also on Facebook: https://www.facebook.com/events/1008817309162312/

We're meeting on the 5th floor. If you show up and the door to the room is locked, knock, and if nobody answers, look around for us elsewhere on the floor. If the doors to the building are locked, try the other entrances and see if you can tailgate in. If everything is locked, we'll try to have somebody let people in.

There are usually snacks at the meetup, though feel free to bring something. We usually get dinner afterward, around 9PM or so.
