jimrandomh comments on How To Lose 100 Karma In 6 Hours -- What Just Happened - Less Wrong

-31 Post author: waitingforgodel 10 December 2010 08:27AM


Comment author: jimrandomh 10 December 2010 01:24:06PM *  9 points [-]

Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?

Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what's happening and pull them out if necessary. Scientology was doing that from the very beginning, and Less Wrong has never done anything like that.

Comment author: David_Gerard 10 December 2010 02:15:12PM *  10 points [-]

Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that's just detail, you've nailed the biggie. Good one.

and Less Wrong has never done anything like that.

SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they're working on a project they think is really important. Also, they have information they can't tell everyone (e.g. things they consider decision-theoretic basilisks). So there are a few untoward forces there. As I said, I hope they all have their wits about them.

/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. someone who knows more than me.

Comment author: Anonymous6004 10 December 2010 03:25:21PM 5 points [-]

Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point.

No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely. They also have a ritualized training procedure in which an examiner repeatedly tries to provoke them, and they have to avoid producing a detectable response on an "e-meter" (which measures stress response). After doing this for a while, they learn to remain calm under the most extreme circumstances and not react. And so when Scientology's leaders abuse them in terrible ways and commit horrible crimes, they continue to remain calm and not react.

Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.

Comment author: Vaniver 10 December 2010 05:57:14PM *  4 points [-]

Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.

What defense against EY does EY strengthen? Because I'm somewhat surprised by how often I hear Aumann's Agreement Theorem bandied about with regard to what is clearly a mistake on EY's part.

Comment author: David_Gerard 10 December 2010 03:58:26PM *  4 points [-]

I was talking generally, not about Scientology in particular.

As I noted, Scientology is such a toweringly bad idea that it makes other bad ideas seem relatively benign. There are lots of cultish groups that are nowhere near as bad as Scientology, but that doesn't make them just fine. Beware of this error. (Useful way to avoid it: don't use Scientology as a comparison in your reasoning.)

Comment author: TheOtherDave 10 December 2010 04:07:42PM 3 points [-]

But that error isn't nearly as bad as accidentally violating containment procedures when handling virulent pathogens, so really, what is there to worry about?

(ducks)

Comment author: David_Gerard 10 December 2010 04:08:57PM 1 point [-]

The forbidden topic, obviously.

Comment author: taw 10 December 2010 04:44:53PM 3 points [-]

No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely.

I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.

If the claim is of merely weakening these ties, then this is definitely happening. I especially mean commitment by signing up for cryonics. It will definitely increase the mental distance between the affected person and their formerly close friends and family; I'd guess about as much as signing up for a weird religion that's mostly perceived as benign would. I doubt anyone has much evidence about this demographic, though.

Comment author: David_Gerard 10 December 2010 05:06:35PM *  5 points [-]

I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.

I don't think they necessarily make them - all that's needed is for the person to loosen the ties in their head, and strengthen the ones to the group.

An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea grips them enough that their social calibration goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven't worked out precisely how to get from the first to the second.)

There are small Christian churches that also go cultish in the same way. Note that in this case the religious ideas are apparently mainstream - but there's enough weird stuff in the Bible to justify all manner of strangeness.

At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I'm not sure how to measure that.)

I need to ask some people about this. Unfortunately, the real experts on cult thinking include several of the people currently going wildly idiotic about cryonics on the Rick Ross boards ... an example of overtraining on a bad experience and seeing a pattern where it isn't.

Comment author: taw 10 December 2010 07:42:57PM 6 points [-]

Regardless of the actual chances of either working, and considering the issue from a purely sociological perspective - signing up for cryonics seems to me to be a lot like "accepting Jesus" / being born again / joining some far-more-religious-than-average subgroup of a mainstream religion.

In both situations there's some underlying, reasonably mainstream meme soup that is more or less accepted (Christianity / strict mind-brain correspondence) but which most people who accept it compartmentalize away. Then some groups decide not to compartmentalize it, and instead accept the consequences of their beliefs. It really doesn't take much more than that.

Disclaimers:

I'm probably among the top 25 posters by karma, but I tend to feel like an outsider here a lot.

The only "rationalist" idea from LW canon I take more or less seriously is the outside view, and the outside view says taking ideas too seriously tends to have horrible consequences most of the time. So I cannot even take outside view too seriously, by outside view - and indeed I have totally violated outside view's conclusions on several occasions, after careful consideration and fully aware of what I'm doing. Maybe I should write about it someday.

In my estimation, all the FAI / AI foom / nonstandard decision theory stuff is nothing but severe compartmentalization failure.

In my estimation, cryonics will probably be feasible in some remote future, but right now the costs of cryonics (very rarely honestly stated by proponents, or backed by serious economic simulations rather than wishful thinking) are far too high, and the chances of it working now far too slim, to bother. I wouldn't even take it for free, as it would interfere with me being an organ donor, and that has non-negligible value for me. And even without that, the personal cost of added weirdness would probably be too high relative to my estimate of it working.

I can imagine alternative universes where cryonics makes sense, and I don't think people who take cryonics seriously are insane, I just think wishful thinking biases them. In the non-zero but, as far as I can tell, very tiny portion of possible future universes where cryonics turned out to work - well, enjoy your second life.

By the way, is there any reason for me to write articles expanding my points, or not really?

Comment author: multifoliaterose 10 December 2010 09:03:30PM *  3 points [-]

I'm probably among the top 25 posters by karma, but I tend to feel like an outsider here a lot.

My own situation is not so different, although:

(a) I have lower karma than you and

(b) There are some LW posters with whom I feel strong affinity

By the way, is there any reason for me to write articles expanding my points, or not really?

I myself am curious and would read what you had to say with interest, which is a weak indication that others would too, but of course it's for you to say whether it would be worth the opportunity cost. The community would probably be more receptive to such pieces if they were cautious and carefully argued than if not; but that takes still more time and effort.

Comment author: taw 11 December 2010 12:09:28AM 9 points [-]

(a) I have lower karma than you

You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.

Also you get more karma for more alignment with groupthink. I even recall how, in the early days of lesswrong, I stated - based on very solid outside view evidence (from every single subreddit I've been to) - that karma and reception would come to correlate not only with quality but also with alignment with groupthink: on a reddit-style karma system, downvoting-as-disagreement / upvoting-as-agreement becomes very significant at some point. People disagreed, but the outside view prevailed.

This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it - and such trivial inconveniences matter.

(b) There are some LW posters with whom I feel strong affinity

I don't think I feel particular "affinity" with anyone here, but I find many posters highly enjoyable to read and/or full of insightful ideas.

I mostly write when I disagree with someone, so for a change (I don't hate everyone all the time, honestly :-p) here are two of the best writings by lesswrong posters I've ever read:

Twilight fanfiction by Alicorn - it is ridiculously good, I guess a lot of people will avoid it because it's Twilight, but it would be a horrible mistake.

Contrarian excuses by Robin Hanson

Comment author: David_Gerard 11 December 2010 01:51:15AM *  4 points [-]

I think it's a plus point that a contrarian comment will get upvotes for effort and showing its work (links, etc.) - that is, the moderation method still seems to be "More like this please" rather than "Like". Being right and obnoxious gets downvotes.

(I think "Vote up" and "Vote down" might be profitably be replaced with "More like this" and "Less like this", but I don't think that's needed now and I doubt it'd work if it was needed.)

Comment author: taw 11 December 2010 07:56:00AM 2 points [-]

More like this/Less like this makes sense for top posts, but is it helpful for comments?

It's ok to keep an imperfect system - LW is nowhere near the groupthink levels of subreddits or Slashdot.

However - it seems to me that stealing the Hacker News / Stack Overflow model of removing the normal downvote and keeping only the upvote for comments (plus report for spam/abuse, or possibly some highly restricted downvote for special situations only, or one which would count for a lot less than an upvote) would reduce groupthink a lot, while keeping all the major benefits of the current system.

Other than "not fixing what ain't broken", are there any good reasons to keep downvote for comments? Low quality non-abusive coments will sink to the bottom just because of not getting upvotes, later reinforced by most people reading from highest rated first.

Disclaimers:

I'm obviously biased as a contrarian, and as someone who really likes reading a variety of contrarian opinions. I rarely bother posting comments saying that I totally agree with something. I occasionally send a private message with thanks when I read something particularly great, but I don't recall ever doing it here yet, even though a lot of posts were that kind of great.

And I fully admit that on several occasions I downvoted a good comment just because I thought the one below it was far better and deserved a lot of extra promotion. I always felt like I was abusing the system this way. Is this common?

Comment author: multifoliaterose 11 December 2010 01:19:12AM *  2 points [-]

You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.

Yes, I've noticed this; it seems like there's a danger of an illusion that one is actually getting something done by posting or commenting on LW, on account of collecting karma by default.

On the upside, I think the net value of LW is positive, so that (taking the outside view and ignoring the quality of particular posts/comments, which is highly variable) the expected value of posts and comments is positive, though probably less than one subjectively feels.

Also you get more karma for more alignment with groupthink [...]

Yes; I've noticed this too. A few months ago I came across Robin Hanson's Most Rationalists Are Elsewhere, which is in a similar spirit.

This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it - and such trivial inconveniences matter.

Agree here. In defense of LW I would say that this seems like a pretty generic feature of groups in general. I myself try to be careful to interpret charitably the statements made by those whose views clash with my own, but I don't know how well I succeed.

I mostly write when I disagree with someone, so for a change (I don't hate everyone all the time, honestly :-p)

Good to know :-)

Twilight fanfiction by Alicorn - it is ridiculously good, I guess a lot of people will avoid it because it's Twilight, but it would be a horrible mistake.

Fascinating; I had avoided it for this very reason but will plan on checking it out.

Contrarian excuses by Robin Hanson

Great article! I hadn't seen it before.

Comment author: taw 11 December 2010 07:18:17AM 0 points [-]

Agree here. In defense of LW I would say that this seems like a pretty generic feature of groups in general. I myself try to be careful to interpret charitably the statements made by those whose views clash with my own, but I don't know how well I succeed.

I don't consider LW particularly bad - it seems considerably saner than a typical internet forum of similar size. The level of drama seems a lot lower than is typical. Is my impression right that most of the drama we get centers on obscure FAI stuff? I tend to ignore those posts unless I feel really bored. I've seen some drama about gender and politics, but honestly a lot less than these subjects normally attract in other similar places.

Comment author: Alicorn 11 December 2010 12:22:57AM 2 points [-]

it is ridiculously good

Thank you! :D

Comment author: [deleted] 10 December 2010 10:21:40PM 1 point [-]

By the way, is there any reason for me to write articles expanding my points, or not really?

I'm just some random lurker, but I'd be very interested in these articles. I share your view on cryonics and would like to read some more clarification on what you mean by "compartmentalization failure" and some examples of a rejection of the outside view.

Comment author: taw 10 December 2010 11:41:09PM 8 points [-]

Here's my view of the current lesswrong situation.

On compartmentalization failure and related issues there are two schools present on less wrong: one favoring the outside view, pro-compartmentalization, firm rooting in experience, and caution; the other favoring a weak inside view, anti-compartmentalization, pure reason, and taking ideas seriously.

Right now there doesn't seem to be any hope of reaching Aumann agreement between these points of view, and at least some members of both camps view many of the other camp's ideas with contempt. The primary reason seems to be that the kind of arguments that people on one end of the spectrum find convincing, people on the other end see as total nonsense - and with full reciprocity.

Of course there are plenty of issues on which both views agree as well - like religion, evolution, akrasia, the proper approach to statistics, and various biases (I think outside viewers demand more evidence than inside viewers that these are also a problem outside the laboratory, but it's not a huge disagreement). And many other disagreements seem to be unrelated to this.

Is this outside-viewers/pro-compartmentalization/firm-rooting-in-experience/caution vs weak-inside-viewers/anti-compartmentalization/pure-reason/taking-ideas-seriously spectrum only my impression, or do other people see it this way as well?

I might very well be biased, as I feel very strongly about this issue, and the most prominent poster, Eliezer, seems to feel very strongly about this in exactly the opposite way. It seems to me that most people here have a reasonably well defined position on this issue - but I know better than to trust my impressions of people on an internet forum.

And second question - can you think of any good way for people holding these two positions to reach Aumann agreement?

As for cryonics, it's a lot of number crunching, textbook economics, outside view arguments etc. - all leading to very, very low numbers. I might do that someday if I'm really bored.

Comment author: Will_Newsome 12 December 2010 01:04:50AM 2 points [-]

Phrasing it as pro-compartmentalization might cause unnecessary negative affect for a lot of aspiring rationalists here at LW, though I'm too exhausted to imagine a good alternative. (Just in case you were planning on writing a post about this or the like. Also, Anna Salamon's posts on compartmentalization were significantly better than my own.)

Comment author: taw 13 December 2010 06:22:06AM 1 point [-]

A quick observation: a perfect Bayesian mind is impossible to actually build, that much we all know, and nobody cares.

But it's a lot worse - it is impossible even mathematically. Even if we expected as little from it as consistently following the rule that P(b|a)=P(c|b)=100% implies P(c|a)=100% over unbounded but finite chains of inference over a countable set of statements (without getting into choice of prior, infinite precision, transfinite induction, uncountable domains etc. - merely the merest minimum still recognizable as Bayesian inference), it could trivially solve the halting problem.

Yes, it would always tell you which theorems are true and which are false, Gödel's theorem be damned. It cannot say anything like P(Riemann hypothesis|basic math axioms)=50%, as this automatically implies a violation of Bayes' rule somewhere in the network (and there are no compartments to limit the damage once that happens - the whole network becomes invalid).
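Here's a minimal formal sketch of that reduction, as I'd reconstruct it - the encoding via step-bounded halting statements and the appeal to countable additivity are my own filling-in, not a canonical proof:

```latex
% Certainty-chaining, the minimal rule assumed above:
\[ P(b \mid a) = P(c \mid b) = 1 \;\Longrightarrow\; P(c \mid a) = 1 \]

% For a Turing machine $M$, let $h_n$ = "$M$ halts within $n$ steps"
% (decidable, so a sound reasoner assigns it probability 0 or 1 correctly),
% and let $H = \exists n\, h_n$ = "$M$ halts".
\[ M \text{ halts at step } k \;\Rightarrow\; P(h_k \mid \mathrm{axioms}) = 1
   \;\Rightarrow\; P(H \mid \mathrm{axioms}) = 1 \quad (h_k \Rightarrow H) \]
\[ M \text{ never halts} \;\Rightarrow\; P(H \mid \mathrm{axioms})
   = \lim_{n \to \infty} P(h_n \mid \mathrm{axioms}) = 0 \]
% Querying $P(H \mid \mathrm{axioms})$ thus decides the halting problem,
% so no computable agent can return these probabilities consistently.
```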

The perfect Bayesian minds people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there's no workaround, and no approximation that is of much use.

Ironically, perfect Bayesian inference works really well inside finite or highly regular compartments, with something else limiting its interactions with the rest of the universe.

If you want an outside view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that even in the very limited machine learning world, Bayesian-inspired systems are only one of many competing paradigms - better suited to some compartments, not working well in others?

I realize that I just explicitly rejected one of the most basic premises accepted by pretty much everyone here, including me until recently. It surprised me that we were all falling for something so obvious in retrospect.

Robin Hanson's post on contrarians being wrong most of the time was amazingly accurate again. I'm still not sure which of the ideas I came to believe that relied on perfect Bayesian minds being the gold standard of rationality I'll need to reevaluate, but it doesn't bother me as much now that I've fully accepted that compartmentalization is unavoidable, and a pretty good thing in practice.

I think there's a nice correspondence between the outside view with its set of preferred reference classes, and Bayesian inference with its set of preferred priors. Except the outside view can very easily be extended to say "I don't know", to estimate its own accuracy as applied to different compartments, to give more complex answers, to evolve in time as reference classes formerly too small to be of any use accumulate enough data to return useful answers, and so on.

For very simple systems, these two should correspond to each other in a straightforward way. For complex systems, we have a choice of sometimes answering "I don't know" or being inconsistent.
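A toy illustration of that extended outside view, in Python - the reference classes, the sample threshold, and the data are entirely illustrative assumptions:

```python
from collections import defaultdict

class OutsideView:
    """Toy reference-class forecaster that is allowed to say 'I don't know'."""

    def __init__(self, min_samples=3):
        self.history = defaultdict(list)  # reference class -> observed outcomes
        self.min_samples = min_samples    # below this, refuse to estimate

    def observe(self, ref_class, outcome):
        self.history[ref_class].append(outcome)

    def estimate(self, ref_class):
        outcomes = self.history[ref_class]
        if len(outcomes) < self.min_samples:
            return None  # "I don't know" - reference class still too small
        return sum(outcomes) / len(outcomes)

forecaster = OutsideView()
for succeeded in [0, 0, 1, 0]:
    forecaster.observe("ambitious world-changing projects", succeeded)

print(forecaster.estimate("ambitious world-changing projects"))  # 0.25
print(forecaster.estimate("cryonics revivals"))  # None - no track record yet
```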

I wanted to write this as a top level post, but a "one of your most cherished beliefs is totally wrong, here's a sketch of a mathematical proof" post would take a lot more effort to write well.

I tried a few extensions of Bayesian inference that I hoped would be able to deal with it, but this is really fundamental.

You can still use a subjective Bayesian worldview - that P(Riemann hypothesis|basic math axioms)=50% is just your intuition. But you must accept that your probabilities can change with no new data, by just more thinking. This sort of Bayesian inference is just another tool of limited use, with biases, inconsistencies, and compartments protecting it from the rest of the universe.

There is no gold standard of rationality. There simply isn't. I have a fall-back position in the outside view; otherwise it would be about as difficult to accept this as it is for a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.

Would anyone be willing to write a top level post out of my comment? You'll either be richly rewarded by a lot of karma, or we'll both be banned.

Comment author: David_Gerard 12 December 2010 01:08:23AM 1 point [-]

I'm trying to write up something on this without actually giving readers fear of ideas. I think I could actually scare the crap out of people pretty effectively, but, ah. (This is why it's been cooking for two months and is still a Google doc of inchoate scribbles.)

Comment author: [deleted] 11 December 2010 05:16:31AM 2 points [-]

Thanks, I now understand what you mean. I'll have to think further about this.

Personally, I find myself strongly drawn to the anti-compartmentalization position. However, I had bad enough problems with it (let's just say I'm exactly the kind of person that becomes a fundamentalist, given the right environment) that I appreciate the outside view and want to adopt it a lot more. Making my underlying assumptions and motivations explicit, and demanding the same level of proof and consistency of them that I demand from any belief, has served me well - so far, anyway.

Also, I'd have to admit that I enjoy reading disagreements most, even if just for disagreement's sake, so I'm not sure I actually want to see Aumann agreement. "Someone is wrong on the internet" syndrome has, on average, motivated me more than reasonable arguments, I'm afraid.

Comment author: taw 11 December 2010 08:00:25AM 0 points [-]

I enjoy reading disagreements most

Does it seem to you as well that removing the downvote for comments (keeping report for spam and other total garbage etc.) would result in more of this? Hacker News seems to be doing a lot better than subreddits of similar size, and this seems like the main structural difference between them.

Comment author: multifoliaterose 11 December 2010 04:09:27AM *  2 points [-]

I know what you're talking about here.

Comment author: David_Gerard 11 December 2010 01:20:49AM *  1 point [-]

And second question - can you think of any good way for people holding these two positions to reach Aumann agreement?

Sure: compartmentalisation is clearly an intellectual sin - reality is all one piece - but we're running on corrupt hardware so due caution applies.

That's my view after a couple of months' thought. Does that work for you?

(And that sums up about 2000 semi-readable words of inchoate notes on the subject. (ctrl-C ctrl-V))

In the present Headless Chicken Mode, by the way, Eliezer is specifically suggesting compartmentalising the very bad idea, having seen people burnt by it. There's nothing quite like experience to help one appreciate the plus points of compartmentalisation. It's still an intellectual sin, though.

Comment author: taw 11 December 2010 07:09:03AM 5 points [-]

Sure: compartmentalisation is clearly an intellectual sin

Compartmentalisation is an "intellectual sin" in certain idealized models of reasoning. The outside view says that not only 100% of human-level intelligences in the universe, but 100% of things even remotely intelligent-ish, were messy systems that used compartmentalisation as one of their basic building blocks, and 0% were implementations of these idealized models - and that in spite of many decades of hard effort and a lot of ridiculous optimism.

So by the outside view, the only conclusion I see is that models condemning compartmentalisation are all conclusively proven wrong, and nothing they say about actual intelligent beings is relevant.

reality is all one piece

And yet we organize our knowledge about reality into an extremely complicated system of compartments.

Attempts at abandoning that and creating one theory of everything, like Objectivism (Ayn Rand famously had an opinion about absolutely everything, no disagreements allowed), are disastrous.

but we're running on corrupt hardware so due caution applies.

I don't think our hardware is meaningfully "corrupt". All thinking hardware ever made, or likely to be made, must take appropriate trade-offs and use appropriate heuristics. Ours seems to be pretty good most of the time when it matters. Shockingly good. Expecting some ideal reasoner that has no constraints is not only physically impossible, it's not even mathematically possible, by Rice's theorem etc.

Compartmentalisation is one of the most basic techniques for efficient reasoning with limited resources - otherwise complexity explodes far more than linearly, and plenty of ideas that made a lot of sense in their old context get transplanted to another context where they're harmful.
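A back-of-the-envelope illustration of that complexity claim, in Python - the pairwise-consistency cost model is a simplifying assumption of mine, not a measurement:

```python
def pairwise_checks(n):
    """Checks needed if every belief must be kept consistent with every other."""
    return n * (n - 1) // 2

n_beliefs, n_compartments = 1000, 10

# One fully decompartmentalised belief network:
global_cost = pairwise_checks(n_beliefs)  # 499,500 checks

# The same beliefs split into 10 compartments of 100, checked only internally:
local_cost = n_compartments * pairwise_checks(n_beliefs // n_compartments)  # 49,500

print(global_cost / local_cost)  # ~10x: cost grows quadratically, not linearly
```

The saving comes precisely from refusing to check ideas against contexts they were never meant for - which is also how ideas that made sense in one compartment stay out of compartments where they'd be harmful.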

The hardware stays what it was, and it was already pretty much fully utilized, so to deal with this extra complexity the model needs to be pruned of a lot of detail the mind could otherwise manage just fine, and/or other heuristics and shortcuts, possibly with far worse consequences, need to be employed a lot more aggressively.

I like this pro-compartmentalization theory, but it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.

Comment author: taw 10 December 2010 04:36:57PM *  1 point [-]

Scientology was doing that from the very beginning

Quick reading suggests that Hubbard first founded "dianetics" in late 1949/early 1950, and it became "scientology" only in late 1953/early 1954. As far as I can tell it took them many years until they became the Scientology we know. There's some evidence of evaporative cooling at that stage.

And just as David Gerard says, modern Scientology is an extreme case. By cult I meant something more like the Objectivists.

Comment author: David_Gerard 10 December 2010 04:55:27PM *  6 points [-]

The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself - I started WikiProject Scientology. :-) They were mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff - before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics' sites which were highly informative but frequently so bitter as to be all but unreadable.

(Despite the key rule of NPOV - write for your opponent - I doubt the CoS is a fan of WP's Scientology articles. Ah well!)