Should you try to do good work on LW?

36 Post author: cousin_it 05 July 2012 12:36PM

I used to advocate trying to do good work on LW. Now I'm not sure; let me explain why.

It's certainly true that good work stays valuable no matter where you're doing it. Unfortunately, the standards of "good work" are largely defined by where you're doing it. If you're in academia, your work is good or bad by scientific standards. If you're on LW, your work is good or bad compared to other LW posts. Internalizing that standard may harm you if you're capable of more.

When you come to a place like Project Euler and solve some problems, or come to OpenStreetMap and upload some GPS tracks, or come to academia and publish a paper, that makes you a participant and you know exactly where you stand, relative to others. But LW is not a task-focused community and is unlikely to ever become one. LW evolved from the basic activity "let's comment on something Eliezer wrote". We inherited our standard of quality from that. As a result, when someone posts their work here, that doesn't necessarily help them improve.

For example, Yvain is a great contributor to LW and has the potential to be a star writer, but it seems to me that writing on LW doesn't test his limits, compared to trying new audiences. Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that). Of course there have been many examples of seemingly good work posted to LW. Homestuck fandom also has a lot of nice-looking art, but it doesn't get fandoms of its own.

In conclusion, if you want to do important work, cross-post it if you must, but don't do it for LW exclusively. Big fish in a small pond always looks kinda sad.

Comments (32)

Comment author: gwern 05 July 2012 04:06:35PM 16 points [-]

Yvain is a great contributor to LW and has the potential to be a star writer, but it seems to me that writing on LW doesn't test his limits, compared to trying new audiences. Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that).

But would either of you have worked on them at all outside LW's confines? Ceteris paribus, it'd be better to work in as high-status and rigorous a community as possible, but ceteris is never paribus.

Comment author: cousin_it 05 July 2012 04:56:50PM *  7 points [-]

Good point, I agree with RobertLumley and you that doing good work on LW is better than doing nothing. But if you're already doing good work by LW standards (and you are!), it probably makes sense to move beyond it.

Comment author: Yvain 07 July 2012 05:28:38PM *  7 points [-]

Thank you for the compliments. I don't know much about mathematics, but if you've really proven a new insight into Gödel's theorem, that sounds very impressive and you should certainly try to get it published formally. But I'm not sure what you're suggesting I should do. I mean, I've been talking to Luke about maybe helping with some SIAI literature, but that's probably in the same category as here.

My career plan is medicine. I blog as a hobby. One day I would like to write about medicine, but I need a job and some experience before I can do that credibly. If you or anyone else has anything specific and interesting/important you think I should be (or even could be) doing, please let me know. Bonus points if it involves getting paid.

Comment author: RobertLumley 05 July 2012 02:07:16PM *  18 points [-]

It seems important to note, however, that doing good work on LW is superior to not doing good work at all.

Edit:

In conclusion, if you want to do important work, cross-post it if you must, but don't do it for LW exclusively. Big fish in a small pond always looks kinda sad.

Most places where this could be cross-posted besides LW would be even smaller ponds. So I'm not sure this supports your point here.

Comment author: Viliam_Bur 07 July 2012 03:09:25PM 3 points [-]

This is too abstract for me. What exactly does "good work" mean? Let's say I wrote an article, or just plan to write one, and I ask myself -- is LW the right place to publish it?

Well, it depends on the topic, on the style of writing, and on what kind of audience and reaction I want. There are more things to consider than just "good" or "bad". Is it Nobel Prize-winning material? Then I send it to a prestigious scientific journal. Is it just a new idea I want feedback on? Then I guess whether I'd get better feedback on LW or somewhere else; again, the choice depends on the topic. Something personal? I put it on my blog. Etc. Use your judgement.

It is also possible to write for a scientific journal and post a hyperlink on LW, or to post a draft on LW, collect feedback, improve the article, and send it somewhere else.

Yes, if the work is optimized for LW, it is not optimized for <something else>. Different audiences require different styles. Maybe LW could have a subsection where the articles must be written using the scientific lingo (preferably in a two-column, single-spaced format), or maybe we are OK with drafts. I prefer legibility, journals prefer brevity, other people may prefer something else.

Comment author: Bruno_Coelho 06 July 2012 03:06:41AM 2 points [-]

Some veterans who do good work here could do it in another place, if that's valuable enough. In that sense, the initial project was to create people with the strength to intervene effectively in the world. This community already has people at that level.

Comment author: Manfred 05 July 2012 11:43:23PM *  2 points [-]

Big fish in a small pond always looks kinda sad.

Well, it's often helpful for the smaller fish - perhaps the fish and pond metaphor is a bit misleading in this way. And, since we're here for many reasons, not just one ("size"), people may be big fish along some dimensions and smaller fish along others.

Comment author: John_Maxwell_IV 05 July 2012 06:36:38PM *  2 points [-]

I'm not sure I know of a better place to do philosophy (as Paul Graham defines it) than LW:

let's try to answer the question

Of all the useful things we can say, which are the most general?

...

One drawback of this approach is that it won't produce the sort of writing that gets you tenure.

(PG's example of a useful general idea is that of a controlled experiment.)

What specific new audiences do you think Yvain should try out?

Comment author: David_Gerard 05 July 2012 08:33:21PM 2 points [-]

Yvain's stuff is highly linkable elsewhere. His article is the go-to link for the typical mind fallacy, for example.

Comment author: private_messaging 06 July 2012 03:20:53PM *  0 points [-]

It starts out great, with the imagination stuff, but then descends into addressing the local PUA crap. Some of the comments are very insightful on the imagination side and such, but the top ones are about the PUA crap. I actually recall wanting to link it a few years back, before I started posting much, but I searched for some other link because I did not want the PUA crap.

Honestly, it would have been a lot better if Yvain had started his own blog and built up a reader base over time. But few people have, I dunno, the arrogance to do that (I recall he wrote that he underestimates himself; that may be why), and so we are stuck primarily with the people who overestimate themselves blogging, starting communities, etc.

Comment author: David_Gerard 06 July 2012 04:17:15PM 2 points [-]

Comment author: private_messaging 07 July 2012 06:17:51AM *  -2 points [-]

In the interest of the discussion, here is the article in question.

It actually is a perfect example of how LW is interested in science:

There is the fact that some people have no mental imagery, but live totally normal lives. That's amazing! They're more different than you usually imagine sci-fi aliens to be! And yet there is no obvious difference. It is awesome. How does that even work? Do they have mental imagery somewhere inside but no reflection on it? Etc., etc.

And the first thing that was done with this awesome fact here was to 'update' in the direction of trusting the PUA community's opinion on women more than women themselves, and that was done by the author. That's not even a sufficiently complete update, because the PUA community - especially the manipulative misogynists with zero morals and the ideal to become a clinical sociopath as per check list, along with their bragging that has selection bias and unscientific approach to data collection written all over it - is itself prone to the typical mind fallacy (as well as a bunch of other fallacies) when they see women as beings just as morally reprehensible as they themselves are.

This, cousin_it, is a case example of why you shouldn't be writing good work for LW. Some time back you were on the verge of something cool - perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour and you explore potential confirmations of it.

Comment author: komponisto 07 July 2012 06:52:29PM *  4 points [-]

the manipulative misogynists with zero morals and the ideal to become a clinical sociopath as per check list, along with... [an] unscientific approach to data collection

A classic Arson, Murder, and Jaywalking right there.

Comment author: Rhwawn 07 July 2012 11:59:20PM 0 points [-]

I don't know, given the harm bad data collection can do, I'm not sure being a clinical sociopath is much worse.

Comment author: private_messaging 08 July 2012 01:32:11AM *  -1 points [-]

Whatever data on physiology the Nazis collected correctly, we are relying on today. Even when very bad guys collect data properly, the data is usable. When it's online bragging by people fascinated with 'negs'... not so much. The required condition is that the data is badly collected; that the guys are trying to be sociopaths does not by itself suffice.

Comment author: paulfchristiano 07 July 2012 07:30:12PM 0 points [-]

Some time back you were on the verge of something cool - perhaps even proving that defining real-world 'utility' is incredibly computationally expensive for UDT. Instead, well, yeah, there's the local 'consensus' on AI behaviour and you explore potential confirmations of it.

You seem to be saying: "you were close to realizing this problem was unsolvable, but instead you decided to spend your time exploring possible solutions."

Generally, you seem to be continually frustrated about something to do with wireheading, but you've never really made your position clear, and I can't tell where it is coming from. Yes, it is easy to build systems which tear themselves to pieces, literally or internally. Do you have any more substantive observation? We see a path to building systems which have values over the real world. It is full of difficulties, but the wireheading problems seem understood and approachable / resolved. Can you clarify what you are talking about, in the context of UDT?

Comment author: private_messaging 08 July 2012 02:08:10AM *  2 points [-]

We see a path to building systems which have values over the real world.

The path he sees has values over an internal model, but the internal model is perfect AND it is faster than the real world, which stretches it a fair amount if you ask me. It's not really a path; he's simply using "a sufficiently advanced model is indistinguishable from the real thing". And we still can't define what paperclips are if we don't know the exact model that will be used, as the definition is only meaningful in the context of a model.

The objection I have is that it is a: unnecessary to define values over the real world (the alternatives work fine, e.g. for finding imaginary cures for imaginary diseases which we make match real diseases), b: very difficult or impossible to define values over the real world, and c: values over the real world are necessary for the doomsday scenario. If this can be narrowed down, then there's precisely the bit of AI architecture that has to be avoided.

We humans are messy creatures. It is very plausible (in light of potential irreducibility of 'values over real world') that we value internal states on the model, and we also receive negative reinforcement for model-world inconsistencies (when the model-prediction of the senses does not match the senses), resulting in learned preference not to lose correspondence between model and world, in place of straightforward "I value real paperclips therefore I value having a good model of the world" which looks suspiciously simple and poorly matches the observations (no matter how much you tell yourself you value real paperclips, you may procrastinate).

edit: and if I don't make my position clear, it is because I am opposed to fuzzy, ill-defined woo where the distinction between models and worlds is poorly defined and the intelligence is a monolithic blob. It's hard to state an objection to an ill-defined idea which always offshoots into some anthropomorphic idea (e.g. wireheading gets replaced with a real-world goal to have a physical wire in a physical head that is to be kept alive with the wire).

Comment author: Viliam_Bur 09 July 2012 09:37:54AM 0 points [-]

It is very plausible [...] that we value internal states on the model, and we also receive negative reinforcement for model-world inconsistencies [...], resulting in learned preference not to lose correspondence between model and world

Generally correct; we learn to value good models, because they are more useful than bad models. We want rewards, therefore we want to have good models, therefore we are interested in the world out there. (For a reductionist, there must be a mechanism explaining why and how we care about the world.)

Technically, sometimes the most correct model is not the most rewarded model. For example, it may be better to believe a lie and be socially rewarded by members of my tribe who share the belief than to have a true belief that gets me killed by them. There may be other situations, not necessarily social, where perfect knowledge is out of reach, and a better approximation may be in the "valley of bad rationality".

it is unnecessary to define values over the real world (the alternatives work fine, e.g. for finding imaginary cures for imaginary diseases which we make match real diseases) [...] there's precisely the bit of AI architecture that has to be avoided.

In other words, make an AI that only cares about what is inside the box, and it will not try to get out of the box.

That assumes that you will feed the AI all the necessary data, and verify that the data is correct and complete, because the AI will be just as happy with any kind of data. If you give incorrect information to the AI, the AI will not care, because it has no definition of "incorrect"; even in situations where the AI is smarter than you and could have noticed an error that you didn't notice. In other words, you are responsible for giving the AI the correct model, and the AI will not help you with this, because the AI does not care about the correctness of the model.

Comment author: private_messaging 09 July 2012 11:20:32AM *  2 points [-]

You put it backwards... making an AI that cares about truly real stuff as its prime drive is likely impossible, and certainly we don't know how to do that, nor do we need to. edit: i.e. you don't have to sit and work and work and work to find out how to make some positronic mind not care about the real world. You get this by simply omitting some mission-impossible work. Specifying what you want, in some form, is unavoidable.

Regarding verification, you can have the AI search for the code that predicts the input data best, and then, if you are falsifying the data, the code will include a model of your falsifications.
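A minimal toy sketch of that idea, under my own assumptions (the candidate "codes" are a hand-picked list of functions rather than an actual program search): when the observations are systematically falsified, the best-predicting candidate is the one that also models the falsification.

```python
# Hypothetical sketch, not from the comment. The true process is y = 2*x, but
# the observer falsifies every reading by adding 3; the best-scoring hypothesis
# ends up modelling the falsification along with the process.

true_process = lambda x: 2 * x
falsify = lambda y: y + 3                      # systematic tampering by the "verifier"

data = [(x, falsify(true_process(x))) for x in range(10)]

# Candidate "codes" the search ranges over (a stand-in for program search).
hypotheses = {
    "y = 2x":     lambda x: 2 * x,
    "y = 3x":     lambda x: 3 * x,
    "y = 2x + 3": lambda x: 2 * x + 3,         # process plus a model of the falsification
}

def prediction_error(h):
    """Sum of squared errors of hypothesis h on the (falsified) data."""
    return sum((h(x) - y) ** 2 for x, y in data)

best = min(hypotheses, key=lambda name: prediction_error(hypotheses[name]))
print(best)  # -> "y = 2x + 3": the selected code includes the falsification
```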

Comment author: fubarobfusco 07 July 2012 08:05:29AM -1 points [-]

And the first thing that was done with this awesome fact here was to 'update' in the direction of trusting the PUA community's opinion on women more than women themselves, and that was done by the author. That's not even a sufficiently complete update, because the PUA community - especially the manipulative misogynists with zero morals and the ideal to become a clinical sociopath as per check list, along with their bragging that has selection bias and unscientific approach to data collection written all over it - is itself prone to the typical mind fallacy (as well as a bunch of other fallacies) when they see women as beings just as morally reprehensible as they themselves are.

This is a really good point ...

This, cousin_it, is a case example of why you shouldn't be writing good work for LW.

... which utterly fails to establish the claim that you attempt to use it for.

Comment author: private_messaging 07 July 2012 09:55:48AM *  4 points [-]

... which utterly fails to establish the claim that you attempt to use it for.

Context, man, context. cousin_it's misgivings are about the low local standards. This article is precisely a good example of such low local standards - and note that I was not picking a strawman here; it was chosen as an example of the best. The article would have been torn to shreds in most other intelligent places (consider the Ars Technica Observatory forum) for the bit I am talking about.

edit: also, on the 'good point': this is how a lot of the rationality here goes: handling partial updates incorrectly. You have a fact that affects literally every opinion one person has about another, and you proceed to update in the direction of confirming your existing opinions and your existing choice of whom to trust. LW has an awfully low standard for anything that agrees with local opinions. This pops up in utility discussions too. E.g. certain things (the possibility of a huge world) scale down all utilities in the system, leaving all actions unchanged. But the actual update that happens in agents that do not handle meta-reasoning correctly for a real-time system updates some A before some B, and then suddenly there is an enormous difference between utilities. It's just a broken model. Theoretically speaking, A being updated and B not being updated is in some sense more accurate than neither being updated, but everything that depends on the relation of A and B is messed up by the partial update. The algorithms for real-time belief updating are incredibly non-trivial (as are the algorithms for Bayesian probability calculation on graphs in general, given cycles and loops). The theoretical understanding behind the rationalism here is just really, really poor.
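A minimal toy illustration of the partial-update point, with made-up numbers of my own: rescaling every utility uniformly leaves the chosen action unchanged, but an agent that has rescaled only some of its utilities so far can act on an enormous spurious gap.

```python
# Hypothetical sketch, not from the comment: a uniform rescaling of every
# utility preserves the decision, but a partial update (B rescaled before A)
# temporarily flips it.

utilities = {"A": 1e9, "B": 2e9}   # hypothetical action utilities
scale = 1e-6                       # e.g. a "huge world" consideration rescales everything

def choose(u):
    return max(u, key=u.get)

print(choose(utilities))                                     # B
fully_updated = {k: v * scale for k, v in utilities.items()}
print(choose(fully_updated))                                 # still B: ordering preserved

partially_updated = dict(utilities)
partially_updated["B"] *= scale    # B has been rescaled, A not yet
print(choose(partially_updated))   # A: decisions made mid-update are broken
```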

Comment author: private_messaging 05 July 2012 04:07:45PM *  2 points [-]

Likewise, my own work on decision theory math would've been held to a higher standard if the primary audience were mathematicians (though I hope to remedy that).

Furthermore, non-mathematicians tend to hold work to different standards: focusing, for instance, on how well your verbal description of what is going on (the filler between equations) matches the verbal descriptions in the cited papers, which tends to get irritating in advanced and/or confusing subjects. Additionally, mathematicians would easily find something tailored to non-mathematicians overly verbose yet insufficiently formal, which drops their expected utility from reading the text. (Instead of a description of what you are actually doing with the equations, designed to help the reader understand your intent, you end up having, in parallel, a handwaved argument with the same conclusion.)

Comment author: Vaniver 14 September 2012 01:23:03PM 0 points [-]

Homestuck fandom also has a lot of nice-looking art, but it doesn't get fandoms of its own

I've been thinking about this for a while, and while I originally agreed with it, I don't think I do anymore. Prequel has its own fandom now, and I think it has significantly pushed the boundaries of the medium with its recent updates. Similarly, Fallout: Equestria has a fandom of its own; HPMOR has its own fandom, and though I hate to mention it in the same breath as those, so does 50 Shades of Grey.