Less Wrong is extremely intimidating to newcomers, and, as Academian pointed out, a document in FAQ form intended for newcomers would help. Later we can decide how best to deliver that document to new Less Wrongers, but for now we can edit the existing (narrow) FAQ to make the site less scary and the standards more evident.

Go ahead and make bold edits to the FAQ wiki page or use this post to discuss possible FAQs and answers in agonizing detail.

[-]Jack 24

So, am I the only one who thinks new users shouldn't be expected to read the sequences before participating? There are works of brilliance there but there are also posts that are far from required reading.

I mean, if a cognitive psychologist shows up and wants to teach us about some cool bias why the hell would she need to read about many worlds or Eliezer's coming of age as a rationalist?

What the FAQ should do is say what topics we've covered, what we think about them and from there link to posts in the sequences where our positions on those topics are covered in more depth. So if someone shows up they can look over the material, decide they want to talk to us about physics and read the posts on physics, and then say what they want to say.

Besides, if someone is just reading the new posts as they come they'll eventually pick up most of what is in the sequences just from links and repetition.

5byrnema
If I comment on Less Wrong, it's because factors conspire to make it worthwhile for me. That is, I participate because I find it fun or helpful. Often, I also find reading background material fun or helpful. But my response when I'm caught not having read something -- this is thought but not spoken -- is that I will be dutiful about reading all the background material when I am being 'paid by the hour'. I am willing to suffer the downvotes; the links that come with them are most efficient for me in the long run and help others who also don't have a mental map of all that has been discussed here.

I have a "matching effort" policy, here and in life in general, where I exert more and more effort on a task as I find that the effort is rewarded. Expecting people to do a lot of work upfront, even before they have formed a positive opinion about Less Wrong, is unrealistic. Some people like to lurk for a while, but I presume there are others like me who want to immediately engage in the active experience of Less Wrong, or not bother. This is probably just a personality difference, whether people prefer to prepare first or just dive in.
3thomblake
In the current state of the FAQ as I read it, reading the sequences is a strong suggestion, and new users are warned that posting without reading the sequences first may result in downvotes and links to the relevant posts, if something is missed that we consider obvious. That said, I think we should have a simple, short-inferential-distance version of the main points of the relevant sequences (ideally without distracting crosslinking) that someone could skim over to make sure there aren't any major gaps in knowledge to worry about.
2RobinZ
You are far from the only such user - I agree with the edits you made to remove this propositional content from the FAQ.
0Airedale
I had just posted this on the same topic in the simultaneous and somewhat overlapping discussion on the Proposed New Features thread. I agree that new readers will come in with different interests and areas of expertise, and strongly suggesting that all of them read all of the sequences before posting (or even reading!) Less Wrong doesn't seem to make a lot of sense, if we're really trying to grow the community. It seems like a good idea to edit the FAQ in the way you suggested. I also suggested thread discussions for answering questions and directing new readers to reading that would be particularly useful to them; at least, I would suggest that if it turns out people here are generally willing to contribute to that sort of thread.
[-]Jack 7

This is Eliezer's baby... but making the second question about him kind of screams "cult!" Objections to changing it?

2thomblake
Wholeheartedly agree. I doubt most people would care who he is upon encountering the site, though it should be somewhere on the FAQ, if only because his karma is so high my first hypothesis would be that he's a bot that's learned to game the system.

Why is "claim an objective morality" on the list of things you shouldn't post against consensus about? I'm a moral realist; historically this has gotten me only slightly heckled, not decried as an obvious amateur.

5Scott Alexander
How about "claim a universally compelling morality"?
2thomblake
Sure. Related: there are no universally compelling arguments
0Jack
What exactly would the domain for a universally compelling morality be? Too large a domain and it is trivially false, too small a domain and it might even be true.
0thomblake
Wait, doesn't Eliezer claim there's an objective morality?
8Tyrrell_McAllister
I would describe Eliezer's position as

  • standard relativism,
  • minus the popular confusion that relativism means that you would or could choose to find no moral arguments compelling,
  • plus the belief that nearly all humans would, with sufficient reflection, find nearly the same moral arguments compelling because of our shared genetic heritage.

Eliezer objects to being called a relativist, but I think that this is just semantics.
5Richard_Kennaway
The third bullet goes so far beyond relativism that it seems quite justified to deny the word. If just about everyone everywhere is observed to have a substantial commonality in what they think right or wrong (whether or not genetic heritage has anything to do with it), then that's enough to call it objective, even if we do not know why it is so, how it came to be, or how it works. Knowledge may be imperfect, and people may disagree about it, but that does not mean that there is nothing that it is knowledge about. We can imagine Paperclippers, Pebblesorters, Baby Eaters, and Superhappies, but I don't take these imagined beings seriously except as interesting thought experiments, to be trumped if and when we actually encounter intelligent aliens. (BTW, regarding accessibility to newcomers: I just made four references that will be immediately obvious to any long-time reader, but completely opaque to any newcomer. A glossary page would be a good idea.)
0Jack
He's a subjectivist as well.
0Matt_Simpson
I think that's what Tyrrell means by standard relativism
1Jack
Well they're different. And Eliezer is both.
5thomblake
This partially depends on where you place 'ethics'. If ethics is worried about "what's right" in Eliezer's terms, then it's not relativist at all - the pebble-sorters are doing something entirely different from ethics when they argue. However, if you think the pebble-sorters are trying to answer the question "what should I do" and properly come up with answers that are prime, and you think that answering that question is what ethics is about, then Eliezer is some sort of relativist. And the answers to these questions will inform the question about subjectivism. In the first case, clearly what's right doesn't depend upon what anybody thinks about what's right; it's a non-relativist objectivism. In the second case, there is still room to ask whether the correct answer to the pebblesorters asking "what should I do" depends upon their thoughts on the matter, or if it's something non-mental that determines they should do what's prime; thus, it could be an objective or subjective relativism.
0Tyrrell_McAllister
I don't know of any relativists who aren't subjectivists. That article points out that non-subjectivist relativism is a logical possibility, but the article doesn't give any actual examples of someone defending such a position. I wonder if any exist.
3Jack
Hobbes might be a candidate if you're okay with distinguishing laws and dictates from the mental states of rulers.
0Matt_Simpson
The article does give an example: cultural relativism. It's objective in that it doesn't depend on the mind of the individual, but it's still relative to something: the culture you are in.
0Tyrrell_McAllister
That is not how I read it. There's a big parenthetical aside breaking up the flow, but excising that leaves (Bolding added.) So, either individualistic or cultural relativisms can be subjectivist. That leaves the possibility, in principle, that either could be non-subjectivist, but the article gives no example of someone actually staking out such a position. You continue: I think that cultural relativism is mind-dependent in the sense that the article uses the term.
0Matt_Simpson
ok, location relativism then. It doesn't depend on what's going on inside your head, but it's still relative.
0Tyrrell_McAllister
But is anyone a location-relativist for reasons that don't derive from being a cultural-relativist or a "sovereign-command" relativist (according to which the moral is whatever someone with lawful authority over you says it is)? Now that I think of it, though, certain kinds of non-subjectivist relativism are probably very common, if rarely defended by philosophers. I'm thinking of the claim that morality is whatever maximizes your genetic fitness, or morality is whatever maximizes your financial earnings (even if you have no desire for genetic fitness or financial earnings). These are relativisms because something might increase your genetic fitness (say) while it decreases mine. But they are not subjectivist because they measure morality according to something independent of anyone's state of mind.
0byrnema
I'm confused by the terminology, but I think I would be a relativist objectivist. I certainly think that morality is relative -- what is moral is agent-dependent -- but whether or not the agent is behaving morally is an objective fact about that agent's behavior, because the behavior either does or doesn't conform with that agent's morality. But I don't think the distinction between a relativist objectivist and a relativist subjectivist is terribly exciting: it just depends on whether you consider an agent 'moral' if it conforms to its morality (relativist objectivist) or yours (relativist subjectivist). But maybe I've got it wrong, because this view seems so reasonable, whereas you've indicated that it's rare.
3Jack
The key phrase for subjectivism is "mind dependent" so if you think other people's morality comes from their minds then you are a relativist subjectivist. I just realized I don't think people should conform to their own morality, I think people should conform to my morality which I guess would make me a subjective non-relativist.
2LucasSloan
So you believe that the word morality is a two-place word and means what an agent would want to do under certain circumstances? What word do you use to mean what actually ought to be done? The particular thing that you and, to a large degree, all humans would want to do under specified circumstances? Or do you believe there isn't anything that should be done other than what existing agents want? Please note that that position is also a statement about what the universe ought to look like.
1byrnema
Yes, morality is a two-place word -- the evaluation function of whether an action is moral has two inputs: agent, action. "Agent" can be replaced by anything that conceivably has agency, so morality can be considered system-dependent, where systems include social groups and all humanity, etc. I wouldn't say morality is what the agent wants to do, but is what the agent ought to do, given its preferences. So I think I am still using it in the usual sense.

I can talk about what I ought to do, but it seems to me I can't talk about what another agent ought to do outside their system of preferences. If I had their preferences, I ought to do what they ought to do. If they had my preferences, they ought to do what I ought to do. But to consider what they ought to do, with some mixture of preferences, is incoherent. I can have a preference for what another agent does, of course, but this is different than asserting a morality. For example, if they don't do what I think is moral, I'm not morally culpable. I don't have their agency.
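A minimal sketch of that two-place framing, in Python (purely illustrative; the agent model and the action names are made up):

    # Sketch: "moral" as a two-place evaluation over (agent, action).
    # For illustration only, an agent's preference system is modeled as
    # the set of actions it endorses.
    def is_moral(agent_endorsed_actions, action):
        """Two-place: moral relative to the agent's own preferences."""
        return action in agent_endorsed_actions

    # Fixing the agent argument yields the familiar one-place predicate:
    megan = {"keep promise", "share food"}
    def is_moral_for_megan(action):
        return is_moral(megan, action)

    assert is_moral_for_megan("keep promise")
    assert not is_moral_for_megan("break promise")

The point of the sketch: the one-place question "is this action moral?" only exists once the agent slot has been filled in.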
0LucasSloan
As far as I can tell, we don't disagree on any matter of fact. I agree that we can only optimize our own actions. I agree that other agents won't necessarily find our moral arguments persuasive. I just don't agree that the words moral and ought should be used the way you do.

To the greater LW community: Is there some way we can come up with standard terminology for this sort of thing? I myself have moved toward using the terminology used by Eliezer, but not everyone has. Are there severe objections to his terminology and if so, are there any other terminologies you think we should adopt as standard?
0Matt_Simpson
You're thinking of the wrong sense of objective. An objective morality, according to this article, is a morality that doesn't depend on the subject's mind. It depends on something else. I.e., if we were trying to determine what should_byrnema is, we wouldn't look at your preferences; instead we would look somewhere else. So for example:

  • A nonrelativist objectivist would say that we would look at the one true universally compelling morality that's written into the fabric of reality (or something like that). So should_byrnema is just should, period.
  • A relativist objectivist might say (this is just one example - cultural relativism) that we would look for should_byrnema in the culture that you are currently embedded in. So should_byrnema is should_culture.

I'm not sure that subjective nonrelativism is a possibility though.
2byrnema
I think "subjective" means based on opinion (a mind's assessment). If Megan-is-moral if she thinks she's moral, then the morality of Megan is subjective and depends on her mind. If Megan is moral if I think she's moral, then it's subjective and depends on my mind. I think that whether an agent is moral or not is a fact, and doesn't depend upon the opinion/assessment of any mind. But we would still look at the agent's preferences to determine the fact. I thought this was already described by the word 'relative'.
0Matt_Simpson
"Subjective" has many meanings. The article uses "subjective" to mean dependent on the mind in any way. Not just a mind's assessment. Given this definition of subjective, the article would classify your last paragraph as an example of subjective relativism.
0byrnema
I see. Just to clarify fully: in my last paragraph, morality depends on the mind because a mind is required for preferences and agency? Are there any exceptions to this?
1Matt_Simpson
Yep. As for exceptions: I dunno, my concept of mind is too fuzzy to have an answer for that.
2byrnema
Thanks, I do understand the framework you're using, and can now say I don't agree with it. First, one wouldn't say that morality is subjective just because the morality of an entity depends upon its preferences and agency. Even an objective morality would usually apply moral judgments only to entities with preferences and agency. Second, subjective should mean that Megan's action could be considered moral by Fred but not moral by Tom. In other words, the morality is determined by and depends upon someone's mind. In the relative objective morality I've been speaking of, neither Megan, Fred nor Tom gets to decide if Megan's action is moral. The morality of the action is a fact of and determined by the system of Megan, her action, and the context of that action. The morality of her action is something that could be computed by something without a mind, and the morality of her action doesn't depend on the computation actually being done.
0Matt_Simpson
I'm not using any framework here, just definitions. The article defined relative and subjective in certain ways in order to classify moral systems, and I've just been relating how the article defines these terms. There's only semantics here, no actual inference.
0byrnema
Using your framing regarding what it is that we are discussing (framings cannot be avoided), perhaps I disagree with your interpretation of the phrase 'mind dependent'. The article writes: The article does not actually define mind-dependent. I think that by "mind-dependent", the article means that it is a mind that is doing the calculation and that assigns the morality, whereas, if I am understanding your position (for example), you seem to think that "mind-dependent" means that an entity being labeled moral must have a mind. In the first paragraph of my last comment, I argued that this sense of mind-dependent would make "objective morality" more or less moot, because we hardly ever talk about the morality of mindless entities. Tyrrell McAllister writes: His understanding of subjectivist also seems to interpret 'mind-dependent' as requiring a mind to do the measuring.
0Matt_Simpson
We seem to be talking past each other, but I'm not entirely sure where the misunderstanding is, so I'll just lay out my view of what the article says again in different terms. A morality is subjective iff you have to look at the mind of an agent in order to determine whether they are moral. e.g., morality as preferences. A morality is objective iff you don't look at the mind of an agent in order to determine whether they are moral. For example, a single morality "written into the fabric of the universe," or a morality that says what is moral for an agent depends on where in the universe the agent happens to be (note that the former is not relative and the latter is, but I don't think we're disagreeing on what that means). In both cases, the only type of thing being called moral is something with a mind (whatever "mind" means here). The difference is whether or not you have to look inside the mind to determine the morality of the agent. So I'm not saying that mind dependent vs. independent is the difference between having a mind and not having a mind; it's the difference between looking at the mind that the agent is assumed to have and not looking at it.
5byrnema
That is more clear, but still describes what I thought I understood of your position. It's rather unconventional, so it took me a while to be certain what you meant. I think that 'subjective' means that a mind is assessing the morality. The key idea is that different minds could assign different moral judgements, so the judgement is mind-dependent. In contrast, any morality that considers the state of an agent's mind in the computation of that agent's morality can be either objective or subjective. For example, suppose it was written on a tablet, "the action of every agent is moral unless it is done with the purpose of harming another agent". The tablet-law is still objective, but the computation of the morality of an action depends on the agent's intention (and mind).

I just experienced a flicker of a different understanding, that helps me to relate to your concept of subjective. Suppose there were two tablets:

Tablet A: The action of every agent is moral unless it harms another agent.

Tablet B: The action of every agent is moral unless it is done with the purpose of harming another agent.

Tablet A measures morality based on the absolute, objective result of an action, whereas Tablet B considers the intention of an action. While this is an important distinction between the tablets, we don't say that Tablet A is an objective morality and Tablet B is a subjective morality. There must be other terms for this distinction. I know that Tablet A is like consequentialism, and Tablet B includes, for example, virtue ethics.
0Matt_Simpson
I was just giving my interpretation of the article's definitions. Do you think my interpretation is unconventional? I don't think I disagree with you about how to parse mind-dependent, I've just been sloppy in putting it into a definition. I would call both Tablet A and Tablet B objective/mind-independent. So how about this for a definition of mind-dependent:
0Jack
If I understand you correctly this is my interpretation as well. But to clarify: there doesn't even have to be an agent in the judgment itself. Take the proposed judgment: "Black holes are immoral". This can either be subjective or objective. You are an objectivist if you look to something other than a mind to determine its truth value. If you think the fact about whether or not black holes are immoral can be found by looking at the universe or examining black holes, you're an objectivist. If you ask "How do I feel about black holes", "How does my society feel about black holes" or "How does God feel about black holes" you are a subjectivist, because to determine whether or not to accede to a judgment you examine a mind or minds.

Edit: I just read byrnema's comment and now I think I probably don't agree with you. You could also be an objectivist or subjectivist about a judgement of a purely mental fact. Objectivist: Jealousy is immoral because it was written onto the side of all quarks. Subjectivist: Jealousy is immoral because I don't like jealousy.
0byrnema
I agree with everything in your first paragraph, and was amazed it wasn't addressed to me. I can't believe how complicated this turns out being due to semantics. We could really use a good systemizer in the whole morality field, to clear the confusion of these tortuously ambiguous terms. (I should add that I'm not aware that there isn't one, but just skimming through this thread and its sisters seems to indicate one is needed.)
3Jack
The wikipedia entry turns out to be a really, really, excellent starting point.
1thomblake
As usual, SEP is more thorough but worse at giving you the at-a-glance summary.
1Jack
Lol, it might as well have been. I couldn't figure out which one of you had it wrong so I just replied to the most recent comment. I'll try to put together a map or diagram for positions in metaethics.
0thomblake
I'm not sure if we have a bona fide expert on metaethics hereabouts. Meta-anything gets squirrely if you're not being really careful.
0thomblake
Surely it's a logical possibility. Stipulate: "What's right is either X or Y, where we ask each person in the universe to think of a random integer, sum them, and pull off the last bit, 0 meaning X is right and 1 meaning Y is right." ETA: CEV, perhaps?
1Jack
Wouldn't "Everyone should do what my moral code says they should" be subjective nonrelativism? Surely there are lots of people who believe that.
2thomblake
I don't think the people who believe that, think that their own mental states are what determine the truth of their moral code.
1Matt_Simpson
Is CEV even an ethical theory? I thought it was more of an algorithm for extracting human preferences to put them in an AI.
0thomblake
Surely it's a de facto ethical theory, since it determines entirely what the FAI should do. But then, the FAI is not supposed to be a person, so that might make a difference for our use of 'ethical'.
0Matt_Simpson
hmm. Then wouldn't it be premised on subjective relativism? (relative to humans)
0thomblake
Yes, I'd considered that when I wrote it, but it's an odd use of 'relative' when it might be equivalent to 'the same for everyone'.
0Matt_Simpson
not all possible minds, just human minds

EDIT: but if you thought all possible minds had the same preferences, then it would be subjective nonrelative, wouldn't it?
0thomblake
Maybe, though in that unlikely event I would suspect that there's some universal law behind that odd fact about preferences, in which case I'd think it would be objective.
0thomblake
Well I'm not sure we need to consider merely logically possible minds, and it's logically possible that non-human minds are physically impossible.
0RobinZ
Only in the sense that it is logically possible that travel to Mars is physically impossible. The wording is deceptive.
0thomblake
I'm not sure what sense you're referring to, or what you're comparing it to, or how it's deceptive.
0RobinZ
Privileging the hypothesis, really.
0thomblake
I'm afraid that wasn't enough to clear it up for me. Nor is it clear how privileging the hypothesis is relevant to a discussion of logical possibility. Or are you claiming that was the wrong domain of inquiry?
1RobinZ
Saying "X is logically possible" bears the conversational implication that X is worth considering - it raises X to conscious attention. But when we're talking about physical possibility, "logically possible" is the wrong criterion for raising hypotheses to conscious attention, because epistemological limitations imply that every hypothesis is logically possible. Given that we have good physical reasons to draw the opposite conclusion in this case, it is generally a mistake to emphasize the possibility.
0thomblake
Ah, I see what you're getting at. But it is not that I was trying to emphasize the possibility that there cannot be non-human minds in order to argue in favor of that hypothesis. Rather, I was pointing out that whether CEV is 'relative' or not (for purposes of this discussion) is an empirical question. For reference, I would not guess that non-human minds are physically impossible (I'd assign significantly less than 10% probability to that hypothesis).
0Matt_Simpson
well then, I'm just not imaginative enough!
3thomblake
Once you've had to argue about ethics with logicians, it becomes natural. "But what if... (completely implausible hypothesis that no one believes)" comes up a lot.
0thomblake
I'm fairly certain you could find people implicitly arguing for some varieties of non-subjective relativism. For example, cultural relativism advances the view that one's culture determines the facts about ethics for oneself, but it's not necessarily mental acts on the part of persons in the culture that determine the facts about ethics. Similarly, Divine Command Theory will give you different answers for different gods, but it's not the mental acts of the persons involved that determine the facts about ethics.
0Tyrrell_McAllister
It's an interesting question. The SEP link in Jack's comment actually gives Divine Command Theory as an example of non-relativistic subjectivism. It's subjectivist because what is moral depends on a mental fact about that god — namely, whether that god approves. It's less clear whether cultural relativism is subjectivist. I'm inclined to think of culture as depending to a large extent on the minds of the people in that culture. (Different peoples whose mental content differed in the right way would have different cultures, even if their material conditions were otherwise identical.) This would make cultural relativism subjectivist as well.
0thomblake
Indeed, I was glossing over that distinction; if you think cultures or God have mental states, then that's a different story. There's also a question of how much "subjectivism" really depends on the relevant minds, and in what way. I could construct further examples, but we already understand it's logically possible, so that would not be of any help if nobody is advocating them. I think the well has run dry on my end w.r.t examples of relativism in the wild.
0Matt_Simpson
Ah, I see. I had always understood relativism to mean what the article calls subjective relativism.
0[anonymous]
Fair enough, there aren't a lot of subjective non-relativists left, lol.
0thomblake
If we're talking about the meanings of terms, how is semantics not a relevant question?
0Tyrrell_McAllister
You asked what Eliezer claims, not for the words that he uses to claim it.
7Matt_Simpson
Objective in the sense that you can point to it, but can't make it up according to your whims. But not objective in the sense of "being written into the fabric of the universe" or that every single agent, with enough reflection, would realize that it's the "correct" morality.
1ata
I still haven't gotten through the metaethics sequence yet, so I can't answer that exactly, but if he believed in an "objective" morality (i.e. some definition of "should" that is meaningful from the perspective of fundamental reality, not based on any facts about minds, or an internally-consistent set of universally compelling moral arguments), then he would probably expect a superintelligence to be smart enough (many times over) to discover it and follow it, and that is quite the opposite of his current position. If I recall correctly, that was his pre-2002 position, and he now considers it a huge mistake.
3thomblake
"Fundamental reality" doesn't have a perspective, so it seems weird to draw the lines there. Rather, there's a fact about what's prime, and the pebblesorters care about that, and there's a fact about what's right, and humans care about that. We can be mistaken about what's right, and we can have disagreements about what's right, and we can change our minds. And given time and progress, we will hopefully get closer to understanding what's right. And if the pebblesorters claim that they care about what's right rather than what's prime, they're factually incorrect.
0ata
Of course — I was just doing my best to imagine the mindset of a non-religious person who believes in an objectively objective morality (i.e. that even in the absence of a deity, the universe still somehow imposes moral laws). Admittedly, I don't encounter too many of those (people who think they've devised universally compelling moral arguments are more common; even big-O Objectivists seem to just be an overconfident version of that), but I do still meet them from time to time, e.g. people who manage to believe in things like "natural law" or "natural rights" (as facts about the universe rather than facts about human minds) without theistic belief. All I was saying was that things like that are what the phrase "objective morality" make me think of, and that Eliezer's conclusions are different enough that I'm not sure they quite fit in the same category. His may be an "objective morality" by our best definitions of "objective" and "morality", but it could make people (especially new people) imagine all the wrong things.
0[anonymous]
Where did he?
0Richard_Kennaway
For example, here. Read the whole thing, not just this illustrative quote: That's a part of the metaethics sequence, to which this posting might be a suitable entry point, which says where he's going, and tells you what to read before going there.
2ata
"Objective morality" usually implies some outside force imposing morality, and the debate over metaethics (at least in the wider world of philosophy, if not on LW) is usually presented as a choice between that and relativism. If I'm understanding Eliezer's current position correctly, it's that morality is an objective fact about subjective minds. This quote sums it up quite well: Unfortunately, when people talk about "objective morality", they're usually talking about the commandments the Lord hath given unto us, or they're talking about coming up with a magical definition of "should" that automatically is the correct one for every being in the universe and doesn't depend on any facts about human minds, or they're talking about their great new fake utility function that correctly compresses all human values (at least all good human values, recursion be damned). I don't know how Eliezer feels about the terminology, but if it were up to me, I'd agree with advising against "claim[ing] an objective morality", if only so that people have to think about what parts of their arguments are more about words than reality.
0Will_Newsome
If I recall correctly it seemed that you mostly argued for an objective morality instead of using it as the (explicit or implicit) linchpin of a larger argument. The former is well and good but the latter is irritating (e.g. "Deity X must exist because there is an objective morality").
0Alicorn
If that's what was meant, then it shouldn't appear as a separate item in a list that also contains the unrelated injunction against easy solutions to FAI.

"We need a FAQ" is solution language.

Why do we think we need one? What appears to be the problem?

What is the desired outcome?

5Jack
So the "hold off on proposing solutions" thing isn't wrong, but in this case we've talked about how LW is difficult for newcomers many, many times before. An FAQ has long been something we agreed on; it hasn't gotten done for reasons of akrasia. In this case, postponing work on the FAQ because we need to keep talking about the problem is just going to make it less likely that the work gets done.
3Morendil
I remember the last time we had this discussion, my conclusion was that we needed "better newcomer orientation". An FAQ is a (possibly) necessary, and (probably) not sufficient, component of a set of solutions leading to the outcome "better newcomer orientation". What we are planning is, by the way, not literally an FAQ.

The Lurkers thread didn't reveal frequent questions people had that we're not answering, or that they have trouble finding the answers to. It did reveal a frequent observation, namely that people find the site intimidating. I suspect that no amount of answering frequently-not-asked questions (on an out-of-the-way page) is going to fix that. I do believe that more discussion of which kinds of top-level posts and which attitudes in the comment stream encourage or discourage participation could fix that.
4Jack
We should talk about this. We should also just write an FAQ. We don't need to postpone the latter for the former. The lurkers aren't who the FAQ is for; if they've been lurking a while they've probably figured a lot out. But when new users who haven't been lurking show up, the same topics come up repeatedly.
5thomblake
Lots of lurkers who claim to be intimidated by our site, and lots of non-lurkers seemingly unfamiliar with our standards.
5Morendil
We can now do better than "lots", thanks to Kevin. Someone with some time on their hands could, for instance, tabulate the top-level comments among the 422 posted to "Attention Lurkers". I've trialed that on a small sample: out of the first 22 comments, 11 say something that I interpret as "intimidated", 3 say something to the effect that they're no longer interested by the topics on LW, and 8 say that they're lurkers but OK with it (or say nothing beyond "hi"). So that's roughly half of them explicitly saying they're intimidated. The more salient fact to me is that all 22 did write a comment when encouraged to do so and the barrier to participation was suitably lowered.

Another salient comment: "Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know", that being the commenter's area of expertise. We may be missing out on many opportunities to engage, by failing to deliberately open up discussions on topics where the community has hidden expertise. I'm thinking I will write up a poll-type post asking people what their area of professional expertise is, and which issue in their domain they think would most benefit from application of the techniques discussed on LW.
4RobinZ
...and which techniques of their domain could benefit LW by being discussed, I would add.
0Kevin
Are you definitely going to do that "Ask Less Wrong"? I want to post it now but don't want to take your karma/status for having that idea... so if you don't plan on making it in the next 24 hours, can I make it? It can really just be a question, the post itself should be very short.
0Morendil
I'd prefer to sleep on it. This isn't quite a spur-of-the-moment idea, I've had this idea for a post setting forth a "marketplace" metaphor for such discussions for a while. But possibly the post asking for expertise info should be separate from that anyway, for housekeeping reasons. Should probably happen within 24h anyway, but we've had a fair number of posts just today, so it's best to let things calm down a bit. ETA: it's not quite "Ask LW", more like "Tell LW". ;)
0Kevin
The desired outcome was that I made this top-level post right before going to sleep, then other people expanded and improved the FAQ as a result of me calling attention to it, which seems to have happened.
0RobinZ
Normally a good question, but it's been answered already: the community is intimidating to new contributors. There are lots of frequently asked questions, and they deserve answering.

Great idea, Kevin. I would also suggest adding the FAQ to the About page here: http://lesswrong.com/lw/1/about_less_wrong/, to allow new users to find it more easily.

Just thought I'd jump in to say that, when I was a newcomer, the most confusing thing for me was the constant references to AI and FAI. To be honest, I am still left puzzled by such discussions. I would suggest the FAQ contain a brief outline of what FAI is, and if anybody knows a basic-level post about it, I'd be personally obliged.

[-]Jack 2

What tone do people think the FAQ should take? Right now it is pretty serious and straightforward; jokes would make us less intimidating. But maybe that is a bad idea.

3RobinZ
It's a reference - a serious tone is appropriate for people jumping in to quickly find small amounts of data. In a "Quick-Start Guide" or the like*, a bit of levity would be appropriate.

* I have a file on my hard disk which was supposed to become this, but hasn't been touched since March.
[-][anonymous] 1

Well, as an idea: should "What is karma?" really be the first entry?

1Jack
Maybe reverse the subheadings, so that the questions about what we think come before the details of how to use the site?

Less Wrong needs a general forum, not just an FAQ

0RobinZ
I think tommccabe was discussing this in Proposed New Features for Less Wrong - it would be better to keep the threads separate.

How do I format my comments?

Instructions are provided from the "Help" link under each comment box. The usual things are as follows:

  • Italics: *text in italics*
  • Bold: **text in bold**
  • Links: [link text](link URL)

Quick aside: if your link URL has parentheses in it, you will need to "escape" the close-paren by inserting a backslash character ("\") into the URL in front of the close-paren (see the example after this list).

  • Blockquotes: at the beginning of a line: > quoted text
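For instance, here is a minimal sketch of an escaped link (the Wikipedia URL is just an illustration of an address containing parentheses):

  • Escaped link: [bias](http://en.wikipedia.org/wiki/Bias_(statistics\))

Without the backslash, Markdown treats the first close-paren as the end of the URL, so the link breaks and a stray ")" is left in your comment.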

More information about the Markdown syntax can be found at daringfireball.net.

3NancyLebovitz
I suggest adding that link to the help sheet or the about page-- I had no idea there were formatting options beyond what was on the help sheet.
0Jack
Just add it to the wiki.
0RobinZ
Where? Under "Feedback", before "What's all this about upvotes and downvotes"?
0Jack
Put it after 5.1
2RobinZ
After "How do I submit an article"? When people will be mathematically certain to submit comments before they ever submit an article? I have to say that I don't like the FAQ as it stands - the entire thing strikes me as patronizing and hostile. I'll contribute, but I'm not going to be happy about it.
2Jack
I mean, do whatever makes sense. It's our FAQ we can do what we want to. If something doesn't work we can change it. I don't like the recent edit either. If you can make it less patronizing and hostile, do!
0RobinZ
I'll see when I can scrape together enough motivation to tackle it - looking at it is leaving me rather frustrated, as I implied.
4Jack
I've edited one of the subsections to make it less patronizing.
0RobinZ
Very nice!
0thomblake
I agree - a definite improvement.
0RobinZ
In retrospect: yeah, that's the right place. Added.