Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to Feedback on LW 2.0
Comment author: Daniel_Burfoot 01 October 2017 05:17:11PM *  14 points [-]

First, I appreciate the work people have done to make LW 2 happen. Here are my notes:

  1. Strong feeling - the links and descriptions of the Sequences, the Codex, and HPMOR (while good) should not be at the top of the page. The top should be the newest material.
  2. Please, please, please include a "hide subthread" option to collapse a comment and all its responses. This is a dealbreaker for me: if a site doesn't have that feature, I won't read the comments.
  3. Current LW has a really nice alternating color scheme for comment/reply. One comment will have a grey background, the comment below it will have a beige background. That is a key feature for visually parsing a comment thread.
  4. I liked the concept of having a main section and a discussion section, where the bar for posting in the latter is lower. For whatever reason, people seem to get angry if you post something that they feel is low quality or not relevant.
  5. I can't put my finger on it exactly, but somehow I don't quite like the default font. It may be that I like a different font for reading on dead tree paper vs on a computer screen?
  6. It may be slightly evil, but the karma display on the right side of the screen makes the site more addictive, because people love to see if they get upvotes or comment replies.
  7. It seems weird to allow people to upvote/downvote an article right from the home page. Do you really want people to vote on an article without reading it?
Comment author: Vaniver 23 November 2015 11:12:53PM 2 points [-]

I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist.

This strikes me as a weird statement, because 7 Habits is wildly successful and seems very solid. What about it bothers you?

(My impression is that "a word to the wise is sufficient," and so most clever people find it aggravating when someone expounds on simple principles for hundreds of pages, because of the implication that they didn't get it the first time around. Or they assume it's less principled than it is.)

Comment author: Raelifin 24 November 2015 10:02:03PM 2 points [-]

I picked 7 Habits because it's pretty clearly rationality in my eyes, but is distinctly not LW style Rationality. Perhaps I should have picked something worse to make my point more clear.

Comment author: Lumifer 23 November 2015 04:15:06PM 0 points [-]

any appearance of a status-hungry manipulator

I can't speak for other people, of course, but he never looked much like a manipulator. He looks like a guy who has no clue. He doesn't understand marketing (or propaganda), the fine-tuned practice of manipulating people's minds for fun and profit. He decided he needs to go downmarket to save the souls drowning in ignorance, but all he succeeded in doing -- and it's actually quite impressive, I don't think I'm capable of it -- is learning to write texts which cause visceral disgust.

Notice the terms in which people speak of his attempts. It's not "has a lot of rough edges", it's slime and spiders in human skin and "painful" and all that. Gleb's writing does reach System I, but the effect has the wrong sign.

Comment author: Raelifin 23 November 2015 05:39:41PM 1 point [-]

Ah, perhaps I misunderstood the negative perception. It sounds like you see him as incompetent, and because he's working with a subject you care about, that incompetence registers as disgusting?

I can understand cringing at the content. Some of it registers that way to me, too. I think Gleb's admitted that he's still working to improve. I won't bother copy-pasting the argument that's been made elsewhere on the thread that the target audience has different tastes. It may be the case that InIn's content is garbage.

I guess I just wanted to step in and second jsteinhardt's comment that Gleb is very growth-oriented and positive, regardless of whether his writing is good enough.

Comment author: Lumifer 23 November 2015 03:59:40PM 1 point [-]

the goal of raising the sanity waterline is a good one, and rationalists should support the attempt

That does not follow at all.

The road to hell is in excellent condition and has no need of maintenance. Having a good goal in no way guarantees that what you do has net benefit and should be supported.

Comment author: Raelifin 23 November 2015 05:27:49PM 1 point [-]

I agree! Having good intentions does not imply the action has net benefit. I tried to communicate in my post that I see this as a situation where failure isn't likely to cause harm. Given that it isn't likely to hurt, and it might help, I think it makes sense to support in general.

(To be clear: Just because something is a net positive (in expectation) clearly doesn't imply one ought to invest resources in supporting it. Marginal utility is a thing, and I personally think there are other projects which have higher total expected-utility.)

Comment author: Raelifin 23 November 2015 03:28:27PM *  7 points [-]

Okay well it seems like I'm a bit late to the discussion party. Hopefully my opinion is worth something. Heads up: I live in Columbus Ohio and am one of the organizers of the local LW meetup. I've been friends with Gleb since before he started InIn. I volunteer with Intentional Insights in a bunch of different ways and used to be on the board of directors. I am very likely biased, and while I'm trying to be as fair as possible here you may want to adjust my opinion in light of the obvious factors.

So yeah. This has been the big question about Intentional Insights for its entire existence. In my head I call it "the purity argument". Should "rationality" try to stay pure by avoiding things like listicles or the phrase "science shows"? Or is it better to create a bridge of content that will move people along the path stochastically even if the content that's nearest them is only marginally better than swill? (<-- That's me trying not to be biased. I don't like everything we've made, but when I'm not trying to counteract my likely biases I do think a lot of it is pretty good.)

Here's my take on it: I don't know. Like query, I don't pretend to be confident one way or the other. I'm not as scared of "horrific long-term negative impact", however. Probably the biggest reason why is that rationality is already tainted! If we back off of the sacred word, I think we can see that the act of improving-how-we-think exists in academia more broadly, self-help, and religion. LessWrong is but a single school (so to speak) of a practice which is at least as old as philosophy.

Now, I think that LW style rationality is superior to other attempts at flailing at rationality. I think the epistemology here is cleaner than most academic stuff and is at least as helpful as general self-help (again: probably biased; YMMV). But if the fear is that Intentional Insights is going to spoil the broth, I'd say that you should be aware that things like https://www.stephencovey.com/7habits/7habits.php already exist. As Gleb has mentioned elsewhere on the thread, InIn doesn't even use the "rationality" label. I'd argue that the worst thing InIn does to pollute the LW meme-pool is include links and references to LW (and plenty of other sources, too).

In other words, I think at worst* InIn is basically just another lame self-help thing that tells people what they want to hear and doesn't actually improve their cognition (a.k.a. the majority of self-help). At best, InIn will out-compete similar things and serve as a funnel which pulls people along the path of rationality, ultimately making the world a nicer, more sane place. Most of my work with InIn has been for personal gain; I'm not a strong believer that it will succeed. What I do think, though, is that there's enough space in the world for the attempt, the goal of raising the sanity waterline is a good one, and rationalists should support the attempt, even if they aren't confident in success, instead of getting swept up in the typical-mind fallacy and ingroup/outgroup and purity biases.

* - Okay, it's not the worst-case scenario. The worst-case scenario is that the presence of InIn aggravates the lords of the matrix into torturing infinite copies of all possible minds for eternity outside of time. :P

(EDIT: If you want more evidence that rationality is already a polluted activity, consider the way in which so many people pattern-match LW as a phyg.)

Comment author: jsteinhardt 20 November 2015 05:15:28PM 11 points [-]

My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.

Comment author: Raelifin 23 November 2015 02:35:40PM 5 points [-]

I just wanted to interject a comment here as someone who is friends with Gleb in meatspace (we're both organizers of the local meetup). In my experience Gleb is kinda spooky in the way he actually updates his behavior and thoughts in response to information. Like, if he is genuinely convinced that the person who is criticizing him is doing so out of a desire to help make the world a more-sane place (a desire he shares) then he'll treat them like a friend instead of a foe. If he thinks that writing at a lower-level than most rationality content is currently written will help make the world a better place, he'll actually go and do it, even if it feels weird or unpleasant to him.

I'm probably biased in that he's my friend. He certainly struggles with it sometimes, and fails too. Critical scrutiny is important, and I'm really glad that Viliam made this thread, but it kinda breaks my heart that this spirit of actually taking ideas seriously has led to Gleb getting as much hate as it has. If he'd done the status-quo thing and stuck to approved-activities it would've been emotionally easier.

(And yes, Gleb, I know that we're not optimizing for warm-fuzzies. It still sucks sometimes.)

Anyway, I guess I just wanted to put in my two (biased) cents that Gleb's a really cool guy, and any appearance of a status-hungry manipulator is just because he's being agent-y towards good ends and willing to get his hands dirty along the way.

Comment author: AndreInfante 16 October 2015 07:53:43AM 2 points [-]

According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.

Comment author: Raelifin 16 October 2015 01:15:05PM 2 points [-]

Impostor entries were generally more convincing than genuine responses. I chalk this up to impostors trying harder to convince judges.

But who knows? Maybe you were a vegetarian in a past life! ;)

Comment author: Illano 15 October 2015 02:25:36PM *  2 points [-]

One thing that surprised me when looking at the data is that omnivores appear to have done slightly better at getting the answers 'right' (as determined by a simple greater-or-less-than-50% comparison). I would have thought the vegetarians would do better, as they would be more familiar with the in-group terminology. That said, I have no clue whether the numbers are even significant given the size of the group, so I wouldn't read too much into it. (Apologies in advance for the awful formatting.)

Number 'correct':   1   2   3   4   5   6   7   8   9  10 | Total
Omnivore            1   0   1   5   3   8   7   3   0   1 |    29
Vegetarian          0   0   2   1   5   4   2   0   0   0 |    14
Comment author: Raelifin 15 October 2015 05:44:37PM *  1 point [-]

You're right, but I'm pretty confident that the difference isn't significant. We should probably see it as evidence that rationalist omnivores are about as capable as rationalist vegetarians.
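One way to sanity-check that hunch is a quick permutation test on the per-judge scores, reconstructed from the counts in Illano's table (a sketch, not a rigorous analysis):

```python
import random

# Per-judge "number correct" scores, rebuilt from Illano's table counts
# (29 omnivores, 14 vegetarians).
omnivore = [1, 3] + [4] * 5 + [5] * 3 + [6] * 8 + [7] * 7 + [8] * 3 + [10]
vegetarian = [3] * 2 + [4] + [5] * 5 + [6] * 4 + [7] * 2

observed = sum(omnivore) / len(omnivore) - sum(vegetarian) / len(vegetarian)

# Permutation test: reshuffle the group labels and count how often a mean
# gap at least as large as the observed one arises by chance.
random.seed(0)
pooled = omnivore + vegetarian
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:29]) / 29 - sum(pooled[29:]) / 14
    if abs(gap) >= abs(observed):
        extreme += 1

print(f"observed mean gap: {observed:.2f}, permutation p ≈ {extreme / trials:.2f}")
```

On these numbers the permutation p-value lands well above any conventional significance threshold, consistent with the "about as capable" reading.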

If we look at average percent of positive predictions (predictions that earn more than 0 points):

Omnivores: 51%

Vegetarians: 46%

If we look at non-negative predictions (counting 50% predictions):

Omnivores: 52%

Vegetarians: 49%

Comment author: gjm 14 October 2015 12:56:57PM *  6 points [-]

every single judge thought themselves decently able to discern genuine writing from fakery. The numbers suggest that every single judge was wrong.

I think the first of these claims is a little too pessimistic, and the second may be too.

Here are some comments made by one of the judges (full disclosure: it was me) at the time. "I found these very difficult [...] I had much the same problem [sc. that pretty much every entry felt >50% credible]. [...] almost all my estimates were 40%-60% [...] I fear that this one [...] is just too difficult." I'm pretty sure (though of course memory is deceptive) that I would not have said that I thought myself "decently able to discern genuine writing from fakery". ("Almost all" was too strong, though, if I've correctly guessed which row in the table is mine. Four of my estimates were 70%. One was 99% but that's OK because that was my own entry, which I recognized. The others were all 40-60%. Incidentally, I got two of my four 70% guesses right and two wrong, and four of my eight 40%/60% guesses right and four wrong.)

On the second, I remark that judge 14 (full disclosure: this was definitely not me) scored better than +450 and got only two of the 13 entries wrong. The probability of any given judge getting 11/13 or better by chance is about 1%. [EDITED to add: As Douglas_Knight points out, it would be better to say 10/12 because judge 14 guessed 50% for one entry.] In a sample of 53 people you'll get someone doing this well just by chance a little over half the time. But wait, the two wrong ones were both 60/40 judgements, and judge 14 had a bunch of 70s and 80s and one 90 as well, all of them correct. With judge 14's probability assignments and random actual results, simulation (I'm too lazy to do it analytically) says that as good a logarithmic score happens only about 0.3% of the time. To figure out exactly what that says about the overall results we'd need some kind of probabilistic model for how people assign their probabilities or something, and I'm way too lazy for that, but my feeling is that judge 14's results are good enough to suggest genuinely better-than-chance performance.
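The binomial part of that argument is easy to check directly. Here's a minimal sketch, assuming each guess is an independent fair coin under the null hypothesis that judges have no discernment:

```python
from math import comb

# Chance of k or more correct out of n for a judge guessing at random (p = 0.5).
def tail(n: int, k: int) -> float:
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p_single = tail(13, 11)            # one judge hitting 11/13 or better: ~1.1%
p_any = 1 - (1 - p_single) ** 53   # at least one such judge among 53
expected = 53 * p_single           # expected number of such judges among 53

print(f"one judge: {p_single:.4f}, any of 53: {p_any:.2f}, expected count: {expected:.2f}")
```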

If anyone wants to own up to being judge 14, I'd be extremely interested to hear what they have to say about their mental processes while judging.

Comment author: Raelifin 14 October 2015 08:24:23PM *  1 point [-]

As Douglas_Knight points out, it's only 10/12, a probability of ~0.016. In a sample of ~50 we should see about one person at that level of accuracy or inaccuracy, which is exactly what we see. I'm no more inclined to give #14 a medal than I am to call #43 a dunce. See the histogram I stuck on to the end of the post for more intuition about why I see these extreme results as normal.
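As a sketch of that arithmetic: under the null that judge #14 was guessing (independent fair coins), the ~0.016 is the chance of exactly 10 correct out of 12, and among ~50 judges we'd expect nearly one judge at that score in each tail:

```python
from math import comb

# Chance of exactly 10 correct out of 12 fair-coin guesses.
p_exact = comb(12, 10) / 2 ** 12   # 66 / 4096, roughly 0.016

# Expected number of judges at exactly that score, per tail, among ~50 judges.
per_tail = 50 * p_exact

print(f"p = {p_exact:.4f}, expected per tail among 50 judges = {per_tail:.2f}")
```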

I absolutely will fess up to exaggerating in that sentence for the sake of dramatic effect. Some judges, such as yourself, were MUCH less wrong. I hope you don't mind me outing you as one of the people who got a positive score; that's a reflection of your being better calibrated. That said, if you say "I'm 70% confident" four times and only get it right twice, that's evidence that you were still (slightly) overconfident when you thought yourself "decently able to discern genuine writing from fakery".

Comment author: gjm 14 October 2015 01:00:53PM 3 points [-]

I should have been putting down 49%/51% at best

But we didn't have that option!

I suspect that at least some judges (including me, though I'm reconstructing rather than actually recalling my thought processes) (1) used 40/60 to indicate "meh, scarcely any idea but I lean this way rather than that" and then (2) felt like they had to use 30/70 for opinions one notch stronger, even though if evaluating them in a vacuum they might have chosen something more like 40% or 60% to represent them.

(In my case, at least, this doesn't make me look much better; even aside from the fact that that isn't how you should assign probabilities, I got exactly half of my eight 40/60 judgements right, and also exactly half of my four 30/70 judgements. I suppose that means I'm consistent, but not in a good way.)

Comment author: Raelifin 14 October 2015 08:14:11PM 3 points [-]

In retrospect I ought to have included options closer to 50%. I didn't expect that they'd be so necessary! You are absolutely right, though.

A big part of LessWrong, I think, is learning to overcome our mental failings. Perhaps the lesson here is that a good judge writes down their credence before seeing the options, then picks whichever option best matches what they wrote. I know that I, personally, try (and often fail) to use this technique when doing multiple-choice tests.
