I used to advocate trying to do good work on LW. Now I'm not sure; let me explain why.
It's certainly true that good work stays valuable no matter where you're doing it. Unfortunately, the standards of "good work" are largely defined by where you're doing it. If you're in academia, your work is good or bad by scientific standards. If you're on LW, your work is good or bad compared to other LW posts. Internalizing that standard may harm you if you're capable of more.
When you come to a place like Project Euler and solve some problems, or come to OpenStreetMap and upload some GPS tracks, or come to academia and publish a paper, that makes you a participant and you know exactly where you stand, relative to others. But LW is not a task-focused community and is unlikely to ever become one. LW evolved from the basic activity "let's comment on something Eliezer wrote". We inherited our standard of quality from that. As a result, when someone posts their work here, that doesn't necessarily help them improve.
For example, Yvain is a great contributor to LW and has the potential to be a star writer, but it seems to me that writing on LW doesn't test his limits, compared to trying new audiences. Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that). Of course there have been many examples of seemingly good work posted to LW. Homestuck fandom also has a lot of nice-looking art, but it doesn't get fandoms of its own.
In conclusion, if you want to do important work, cross-post it here if you must, but don't do it for LW exclusively. A big fish in a small pond always looks kinda sad.
In the interest of the discussion, here is the article in question.
It's actually a perfect example of how LW is interested in science:
There is the fact that some people have no mental imagery, yet live totally normal lives. That's amazing! They're more different than you usually imagine sci-fi aliens to be, and yet there is no obvious difference. It is awesome. How does that even work? Do they have mental imagery somewhere inside but no reflection on it? Etc., etc.
And the first thing the author did with this awesome fact was 'update' in the direction of trusting the PUA community's opinion of women more than women themselves. That's not even a sufficiently complete update, because the PUA community is itself prone to the typical mind fallacy (along with a bunch of other fallacies) when it sees women as beings just as morally reprehensible as its members are. This is especially true of the manipulative misogynists with zero morals, whose ideal is to become a clinical sociopath as per the checklist, and whose bragging has selection bias and an unscientific approach to data collection written all over it.
This, cousin_it, is a case example of why you shouldn't be writing good work for LW. Some time back you were on the verge of something cool - perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour, and you explore potential confirmations of it.
You seem to be saying: "you were close to realizing this problem was unsolvable, but instead you decided to spend your time exploring possible solutions."
Generally, you seem to be continually frustrated about something to do with wireheading, but you've never r...