
Moderately true of Seattle as well (two group houses, plus some people living as housemates or whatever, but not in an explicitly Rationalist Group House). I'm not sure our community is big enough for something like this, but I love the idea; it would be a point in favor of moving to the Bay Area if there were one there (that I had a chance to move into) but not one here.

Hell, it's not even just the Bay Area; Seattle has two explicitly rationalist group houses and plenty of other people who live in more "normal" situations but with other rationalists (I found my current flatmate, when my old one moved out, through the community). Certainly the Bay Area rationalist community is large and this sort of living situation is far from universal even there, but I've heard of several such houses even though I've never actually visited any.

Gah, thank you, edited. Markdown is my nemesis.

Agreed that the above won't work for all people, not even all people who say:

"I haven't and probably can't internalize it on a very deep, systematic level, no matter how many times I re-read the articles."

Nonetheless, I find it a useful thing to consider, both because it's a lot easier (even if there isn't yet such a group in your area) than writing an entire LW-inspired rationality textbook, and because it's something a person can arrange without needing to have already internalized everything (which might be a prerequisite for the "write the textbook" approach). It also provides a lot of benefits beyond solving the specific problem of internalizing the material: I have discovered new material I would not have found as early, if at all; I have engaged in discussions related to the readings that caused me to update other beliefs; I have formed a new social circle of people with whom I can discuss topics in a manner that none of my other circles support; and so on.

For what it's worth, I got relatively little[1] out of reading the Sequences solo, in any form (and RAZ is worse than LW in this regard, because the comments were worth something even on really old and inactive threads, and surprisingly many threads were still active when I first joined the site in 2014).

What really did the job for me was the reading group started by another then-Seattleite[2]. We started as a small group (I forget how many people the first meetings had, but it was a while before we broke 10 and longer before we did it regularly) that simply worked through the core sequences - Map & Territory, then How to Actually Change Your Mind - in order (as determined by the posts on the sequences themselves at first, and later by the order of the Rationality: AI to Zombies chapters). Each week, we'd read the next 4-6 posts (generally adjusted for length) and then meet for roughly 90 minutes to talk about them in groups of 4-8 (as more people started coming, we began splitting up for the discussions). Then we'd (mostly) all go to dinner together, at which we'd talk about anything - the reading topics, other Rationality-esque things, or anything else a group of smart mostly-20-somethings might chat about - and the next week we'd do it again.

If there's such a group near you, go to it! If not, try to get it started. Starting one of these groups is non-trivial. I was already considering the idea before I met the person who actually made it happen (and I met her through OKCupid, not LessWrong or the local rationality/EA community), but I wouldn't have done it anywhere near as well as she did. On the other hand, maybe you have the skills and connections (she did) and just need the encouragement. Or maybe you know somebody else who has what it takes, and need to go encourage them.

[1] Reading the Sequences by myself, the concepts were very "slippery"; I might have technically remembered them, but I didn't internalize them. If there was anything I disagreed with or that seemed unrealistic - and this wasn't so very uncommon - it made me discount the whole post to effectively nothing. Even when something seemed totally, brilliantly true, it also felt untested to me, because I hadn't talked about it with anybody. Going to the group fixed all of that. While it's not really what you're asking for, you may find it does the trick.

[2] She has since moved to (of course) the Bay Area. Nonetheless, the group continues (and is now roughly two years running, meeting nearly every Monday evening). We regularly break 20 attendees now, occasionally break 30, and the "get dinner together" follow-up has grown into a regularly-scheduled weekly event in its own right at one of the local rationalist houses.

I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte.

If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here.

If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then "true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" is something that I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible "statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" being true.

As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence - enough to rule out half the options - to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I've narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it's only half of the necessary evidence. I'd need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
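(If it helps, here's that arithmetic spelled out as a quick Python sketch; the numbers are just the ones from the example above, nothing special about them.)

```python
import math

def bits_gained(options_before: int, options_after: int) -> float:
    """Bits of evidence gained by narrowing a set of equally likely options."""
    return math.log2(options_before / options_after)

total = 1024
remaining = 33  # belief 216 +/- 16 survives the experiment

print(bits_gained(total, 1))          # 10.0 bits needed to single out one belief
print(bits_gained(total, remaining))  # ~4.96 bits: just under half the work
print(bits_gained(total, 32))         # 5.0 bits once one more option is ruled out
print(1 - remaining / total)          # ~0.968 of options eliminated, yet only ~5 of the 10 bits
```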

Replying loooong after the fact (as you did, for that matter) but I think that's exactly the problem that the post is talking about. In logical terms, one can define a category "human" such that it carries an implication "mortal", but if one does that, one can't add things to this category until determining that they conform to the implication.

The problem is, the vast majority of people don't think that way. They automatically recognize "natural" categories (including, sometimes, of unnatural things that appear similar), and they assign properties to the members of those categories, and then they assume things about objects purely on the basis of their appearing to belong to that category.

Suppose you encountered a divine manifestation, or an android with a fully-redundant remote copy of its "brain", or a really excellent hologram, or some other entity that presented as human but was by no conventional definition of the word "mortal". You would expect that, if shot in the head with a high-caliber rifle, it would die; that's what happens to humans. You would even, after seeing it get shot, fall over, stop breathing, cease to have a visible pulse, and so forth, conclude that it is dead. You probably wouldn't ask this seeming corpse "are you dead?", nor would you attempt to scan its head for brain activity (medically defining "dead" today is a little tricky, but "no brain activity at all" seems like a reasonable bar).

All of this is reasonable; you have no reason to expect immortal beings walking among us, or non-breathing headshot victims to be capable of speech, or anything else of that nature. These assumptions go so deep that it's hard to even say where they come from, other than "I've never heard of that outside of fiction" (which is an imperfect heuristic; I learn of things I'd never heard of every day, and I even encountered some of those concepts in fiction before learning they really exist). Nobody acknowledges that it's a heuristic, though, and that can lead to making incorrect assumptions that should be consciously avoided when there's time to consider the situation.

@Caledonian2 said "If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.", but this statement is self-contradictory unless the implication "human" -> "mortal" is not a definitional (logical) one. If it is, then mortality itself is part of "the necessary criteria for identification as human", and you can't conclude that Socrates meets those criteria without observing his mortality.
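(In case a formal sketch helps, here's the same point in Lean; the names are mine, not anything from the thread. If mortality is baked into the definition of "human", the implication comes for free, but so does the obligation to check mortality before classifying anything as human.)

```lean
section
variable {Entity : Type} (LooksHuman Mortal : Entity → Prop)

-- "Human" defined so that mortality is one of the necessary criteria.
def Human (x : Entity) : Prop := LooksHuman x ∧ Mortal x

-- The implication human → mortal then holds by definition...
theorem mortal_of_human {x : Entity} (h : Human LooksHuman Mortal x) : Mortal x :=
  h.2

-- ...but certifying something as Human already requires a proof of its mortality.
theorem human_intro {x : Entity} (looks : LooksHuman x) (dies : Mortal x) :
    Human LooksHuman Mortal x :=
  ⟨looks, dies⟩

end
```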

Agreed. "Torture" as a concept doesn't describe any particular experience, so you can't put a specific pain level to it. Waterboarding puts somebody in fear for their life and evokes very well-ingrained terror triggers in our brain, but doesn't really involve pain (to the best of my knowledge). Branding somebody with a glowing metal rod would cause a large amount of pain, but I don't know how much - it probably depends in the size, location, and so on anyhow - and something very like this on a small scale this can be done as a medical operation to sterilize a wound or similar. Tearing off somebody's finger- and toenails is said to be an effective torture, and I can believe it, but it can also happen fairly painlessly in the ordinary turn of events; I once lost a toenail and didn't even notice until something touched where it should have been (though I'd been exercising, which suppresses pain to a degree).

If you want to know how painful it is to, say, endure the rack, I can only say I hope nobody alive today knows. Same if you want to know the pain level where an average person loses the ability to effectively defy a questioner, or anything like that...

I haven't investigated selling it, but coverage up to a certain multiple of my annual salary is included in my benefits, and there's no point in setting it lower than that amount; I wouldn't get any extra money.

This is a fairly standard benefit from tech companies (and others that have good benefits packages in the US), apparently. It feels odd but it's been like this at the last few companies I worked for, differing only in the insurance provider whose policy is used and the actual limit before you'd need to pay extra.

Nitpick: the article talks about a rabbit kidney, not a mouse one.

It also isn't entirely clear how cold the kidney got, or how long it was stored. It's evidence in favor of "at death" cryonics, but I'm not sure how strong that evidence is. Also, it's possible to survive with substantially more kidney damage than you would ever want to incur as brain damage.

Many employers provide life insurance. I've always thought that was kind of weird (but then, all of life insurance is weird; it's more properly "death insurance" anyhow), but it's a thing. My current employer provides (at no cost to me) a life insurance policy sufficient to pay for cryonics. The payout would currently go to charity - I have no dependents and my family is reasonably well off - but I've considered changing that.
