Some research says that lurkers make up over 90% of online groups. I suspect that Less Wrong has an even higher percentage of lurkers than other online communities.

Please post a comment in this thread saying "Hi." You can say more if you want, but just posting "Hi" is good for a guaranteed free point of karma.

Also see the introduction thread.

Attention Lurkers: Please say hi
636 comments

Hi.

edit: to add some potentially useful information, I think the biggest reason I haven't participated is that I feel uncomfortable with the existing ways of contributing (solely, as I understand it, top-level posts and comments on those posts). I know there has been discussion on LW before on potentially adding forums, chat, or other methods of conversing. Consider me a data point in favor of opening up more channels of communication. In my case I really think having a LW IRC would help.

8Airedale
Hi, I think explanations for lurking, if people feel comfortable giving them, may indeed be helpful. I also felt uncomfortable about posting to LW for a long time and still do to some extent, even after spending a couple months at SIAI as a visiting fellow. Part of the problem is also lack of time; I feel guilty posting on a thread if I haven't read the whole thread of comments, and, especially in the past, almost never had time to read the thread and post in a timely fashion. People tell me that lots of people here post without reading all the comments on a thread, but (except for some of the particularly unwieldy and long-running threads), I can't bring myself to do it. I agree that a forum or Sub-Reddit as announced by TomMcCabe here might encourage broader participation, if they were somewhat widely used without too significant a drop in quality. But the concerns expressed in various comments about spreading out the conversation also seem valid.
3JStewart
Reddit-style posting is basically the same format as comment threads here, it's just a little easier to see the threading. One thing that feels awkward using threaded comments is conversation, and people's attempts to converse in comment threads is probably part of why comment threads balloon to the size they do. That's one area that chat/IRC can fill in well. Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. Top-level posts seem like they should have been able to be exposed to feedback before being judged ready to publish. I'm not really sure what kind of structure would work for this, but if I did, I probably would have jumped into an open thread or a meta thread before now :)
3AdeleneDawner
Google Wave is decent for this - it's wikilike in that the document at hand can be edited by any participant, and bloglike in that comments (including threaded comments) can be added underneath the starting blip. There's a way to set it up so that members of a google group can be given access to a wave automatically, which would be convenient. I have a few invitations left for Wave, if anyone would like to try it. I'm not interested in taking charge of a google group, though.
0PeerInfinity
I agree. Google Wave is awesome. I use it constantly. Though it's still in beta, and it shows. But I guess I shouldn't start ranting about the advantages and disadvantages of Wave here. I also have some Wave invitations left over.
6Peter_de_Blanc
This made me think of how cool a LessWrong MOO would be. I went and looked at some Python-based MOOs, but they don't seem very usable. I'd guess that the LambdaMOO server is still the best, but the programming language is pretty bad compared to Python.
6Jack
What exactly would we do with it?
3Peter_de_Blanc
Chat, and sometimes write code together.
0saliency
Some of the MOO's programming is pretty easy. I think I used to use something called Cyber. You would create your world by creating rooms and exits; with just those two you could create some nice areas. Note that an exit from a room could be something like 'kill dragon'. It got more complex with key objects and automated objects, but even with simple rooms and exits a person could be very creative.
2Peter_de_Blanc
Yes, but if you want to make, say, a chess AI or a computer algebra system, then your code ends up being much longer and harder to read than it would be in Python.
0[anonymous]
A LW MOO would be awesome. I think it would be fun exploring the worlds LessWrongers would create. At the same time we could just take part of LambdaMOO and create rooms.
0Morendil
I liked LambdaMoo enough that I wrote a compiler for it, targeting the JVM. Fun stuff.
4Kevin
#lesswrong on Freenode! And a local Less Wrong subreddit is coming, eventually...
0Jack
IT IS?! Really?
0Kevin
The Less Wrong site authorities all want it; it's just an issue of getting someone to program it. It's not exceptionally challenging or anything to code, but it would require some real programmer-hours.
0[anonymous]
http://webchat.freenode.net/?channels=lesswrong# There it is. (at least, that is how I know to access it...)
20homunq

Hi.

I am not actually a lurker - I currently have 13 karma - but I am not a heavy participator. However, now I would like to get to 20 karma so I can make a post on why MWI makes acausal incentives into minor considerations. I would also be gratified if someone told me how to make my draft of this post linkable, even if it does not show up within "new".

I think that you should get some bonus towards the initial 20 karma for your average karma per post. This belief is clearly self-serving, but not necessarily thereby invalid. I believe my own average karma per post is decent but not outstanding.

I believe that the businesslike tone of this post, as a series of declarative statements, will be seen as excessive subservience to the imagined norms of a community of rationalists, and thus net me less status and karma than a chattier post. I am honestly unsure if the simple self-referential gambit of this paragraph will help or hurt this situation.

10homunq

I posted a diary, and it was banned for containing a dangerous idea. I can understand that certain ideas are dangerous; in fact, in the discussion I started, I consciously refrained from expressing several sub-points for that reason, starting with my initial post. But I think that if there's such a policy, it should be explicit, and there should be some form of appeal. If the very discussion of these issues shouldn't happen in public, then there should be a private space to give whatever explanation can be given of why. A secret, unappealable rule which cannot even be discussed - this is not the path to rationalism, it's the way down the rabbit hole.

-1PhilGoetz
What? Is this separate from the recent Banned Post? Is this a different idea?
0FAWS
It was a counter argument against the dangerous topic being dangerous, which by necessity touched the dangerous topic and which wasn't strong enough to justify this (anyone for whom the dangerous topic actually would be dangerous [rather than just causing nightmares] would almost by necessity already be aware of a stronger argument).
2homunq
Interesting. Thanks, uprated; with the caveat that of course, we only have your word that the other argument is "stronger". Without further evidence, it's my rationality plus consideration of the issue minus overconfidence against yours. You have an advantage on consideration, since you know both arguments while I only know that I know one; however, on the whole, I think it would be pathological for me to abandon my argument and belief just on that basis. As for the other aspects, we're both probably smarter and less biased than average people, and I don't see any argument to swing that. In other words, I still think I'm right.
-2Eliezer Yudkowsky
No posts on Riddle Theory.
6MBlume
Nor joke warfare
4dclayh
Nor pictures of birds.
15homunq

Nor writing "Bloody Mary" in lipstick on mirrors?

Seriously, my post was about why that stuff is not scary. Fiction can be good allegory for reality, but those stories all use a lot of you-should-be-scared tricks, all very well and good for ghost stories, but not conducive to actual discussion.

We are swimming in a soup of sirens' songs, every single day. Dangerous ideas don't just exist, they abound. But I see no evidence of any dangerous ideas which are not best fought with some measure of banality, among other tactics. The trappings of Avert Your Eyes For That Way Lies Doom seem to be one of the best ways to enhance the danger of an idea.

In fact... what if Eliezer himself... no, that would be too horrible... oh my god, it's full of stars. (Or, in serious terms: I'm being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).

Gah, it's incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me. Of course I can find plenty of rational arguments to support that, but I also trust the feeling. I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect. You're smart people and high status in this arena and I probably shouldn't laugh at your bugaboos.

I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect.

Just to point out some irony - I'm participating in the "that which must not be mentioned" dance out of lost respect. I no longer believe Eliezer is able to consider such questions rationally. Anyone who wants to have a useful discussion on the subject must find a place outside of Eliezer's influence to do it. For much the same reason I don't try to discuss the details of biology in church.

4timtyler
FWIW, it seems pretty ridiculous to me too. It might be funny - were it not so negative. Plus, if you don't do the dance just right, your comments get deleted by the moderator.
4thomblake
So apparently either "that which can be destroyed by the truth should be" is false, or you've written dangerous falsehoods which would overtax the rationality of our readers. Eliezer's response above seems to imply the former.
1homunq
Did you read the "riddle theory" link? The riddle is not dangerous because it's false, but because it's incomprehensible. And of course, if you meant to list all the possibilities, you left out the ones where E. is just wrong about the danger.
1timtyler
My comparison at the time was to The Ring.
2cousin_it
Very good question, but AFAIK Eliezer tries to not think the dangerous thought, too. Seconded.
4timtyler
I don't think there was ever any good evidence that the thought was dangerous. At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors - if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable - if the message needs to get out it can be put into TV adverts and billboards - and then few will escape exposure. In which case, the thought seems to be more forbidden than dangerous.
5jimrandomh
If there was any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don't take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn't.
7wedrifid
It actually is, in the sense we use the term here.
4SilasBarta
Exactly. One must be careful to distinguish between "this is not evidence" and "accounting for this evidence should not leave you with a high posterior".
1timtyler
I think we already had most of the details, many of them in BOLD CAPS for good measure. But there is the issue of probabilities - of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future - and that those thoughts will motivate people to act.
0jimrandomh
No, you haven't. The worst of it has never appeared in public, deleted or otherwise.
2timtyler
Fine. The thought is evidently forbidden, but merely alleged dangerous. I see no good reason to call it "dangerous" - in the absence of publicly verifiable evidence on the issue - unless the aim is to scare people without the inconvenience of having to back up the story with evidence.
0EStokes
If one backed it up with how exactly it was dangerous, people would be exposed to the danger.
6timtyler
The hypothetical danger. The alleged danger. Note that it was alleged dangerous by someone whose living apparently depends on scaring people about machine intelligence. So: now we have the danger-that-is-too-awful-to-even-think about. And where is the evidence that it is actually dangerous? Oh yes: that was all deleted - to save people from the danger! Faced with this, it is pretty hard not to be sceptical.
5khafra
I really don't have a handle on the situation, but the censored material has allegedly caused serious and lasting psychological stress to at least one person, and could easily be interpreted as an attempt to get gullible people to donate more to SIAI. I don't see any way out for an administrator of human-level intelligence.
-1timtyler
AFAICT, the stresses seem to be largely confined to those in the close orbit of the Singularity Institute. Eliezer once said: "Beware lest Friendliness eat your soul". So: perhaps the associated pathology could be christened Singularity Fever - or something.
5EStokes
I don't donate to SIAI on a regular basis, but I haven't donated because of being scared of UFAI. I think more about aging and death. So, I'm assuming that UFAI is not why most people donate. Also, this incident seems like a net loss for PR, so it being a strategy for more donations doesn't really seem to make sense. As for the evidence, what you'd expect to see in a universe where it was dangerous would be it being deleted. (Going somewhere, will be back in a couple of hours)
7homunq
I have little doubt that some smart people honestly believe that it's dangerous. The deletions are sufficient evidence of that belief for me. The belief, however, is not sufficient evidence for me of the actual danger, given that I see such danger as implausible on the face of it. In other words, sure, it gets deleted in the world where it's dangerous, as in the world where people falsely believe it is. Any good Bayesian should consider both possibilities. I happen to think that the latter is more probable. However, of course I grant that there is some possibility that I'm wrong, so I assign some weight to this alleged danger. The important point is that that is not enough, because the value of free expression and debate weighs on the other side. Even if I grant "full" weight to the alleged danger, I'm not sure it beats free expression. There are a lot of dangerous ideas - for example, dispensationalist Christianity - and, while I'd probably be willing to suppress them if I had the power to do so cleanly, I think any real-world efforts of mine to do so would be a net negative because I'd harm free debate and lower my own credibility while failing to suppress the idea. Since the forbidden idea, insofar as I know what it is, seems far more likely to independently occur to various people than something like dispensationalism, while the idea of suppressing it is less likely to do so than in that case, I think that such an argument is even stronger in this case.
0EStokes
Well, I figure that if people who have proven rational in the past see something as potentially dangerous, it's not proof, but it lends the claim more weight. Basically, the idea that there is something dangerous there should be taken seriously. Hmm, what I meant was that it being deleted isn't evidence of foul play, since it'd happen in both instances. I don't see any arguments against except for surface implausibility? Free expression doesn't trump everything. For example, in the Riddle Theory story, the spread of the riddle would be a bad idea. It might occur to people independently, but they might not take it seriously, and at least the spread will be lessened. I'm not sure if it turned out for the better, deleting it, because people only wanted to know more after its deletion. But who knows.
5homunq
I have several reasons, not just surface implausibility, for believing what I do. There's little point in further discussion until the ground rules are cleared up.
0EStokes
Okay.
-1timtyler
Riddle theory is fiction. In real life, humans are not truth-proving machines. If confronted with their Gödel sentences, they will just shrug - and say "you expect me to do what?" Fiction isn't evidence. If anything, it shows that there is so little real evidence of ideas so harmful that they deserve censorship that people have to make things up in order to prove their point.
5timtyler
There are PR upsides: the shepherd protects his flock from the unspeakable danger; it makes for good drama and folklore; there's opportunity for further drama caused by leaks. Also, it shows everyone who's the boss. A popular motto claims that there is no such thing as bad publicity.
2EStokes
Firstly, if there's an unspeakable danger, surely it'd be best to try and not let others be exposed, so this one's really a question of whether it's dangerous, and not an argument in itself. It's only a PR stunt if it's not dangerous; if it's dangerous, good PR would merely be a side effect. The drama was bad IMO. Looks like bad publicity to me. I discredit the PR stunt idea because I don't think SIAI would've been dumb enough to pull something like this as a stunt. If we were being modeled as ones who'd simply go along with a lie - well, there's no way we'd be modeled as such fools. If we were modeled as ones who would look at a lie carefully, a PR stunt wouldn't work anyways. There's also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.
3wnoise
Well, many are only taking it seriously under pain of censorship.
2EStokes
I dunno, I'd call that putting up with it. Edit: Why do I keep getting downvoted? This comment wasn't meant sarcastically, though it might've been worded carelessly. I'm also confused about the other two in this thread that got downvoted. Not blaming you, wnoise. Edit2: Back to zeroes. Huh.
0wedrifid
I only just read your comments and my votes seem to bring you up to 1.
2timtyler
Well, it doesn't really matter what the people involved were thinking, the issue is whether all the associated drama eventually has a net positive or negative effect. It evidently drives some people away - but may increase engagement and interest among those who remain. I can see how it contributes to the site's mythology and mystique - even if to me it looks more like a car crash that I can't help looking at. It may not be over yet - we may see more drama around the forbidden topic in the future - with the possibility of leaks, and further transgressions. After all, if this is really such a terrible risk, shouldn't other people be aware of it - so they can avoid thinking about it for themselves?
2jimrandomh
Not quite. It's a question of what the probability that it's dangerous is, what the magnitude of the effect is if so, what the cost (including goodwill and credibility) of suppressing it is, and what the cost (including psychological harm to third parties) of not suppressing it is. To make a proper judgement, you must determine all four of these, separately, and perform the expected utility computation (probability × effect-if-dangerous + effect-if-not-dangerous vs. cost). A sufficiently large magnitude of effect is sufficient to outweigh both a small probability and a large cost. That's the problem here. Some people see a small probability, round it off to 0, see that the effect-if-not-dangerous isn't huge, and conclude that it's ok to talk about it, without computing the expected utility. I tell you that I have done the computation, and that the utility of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must be either missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero. Both missing information and miscalculation are especially likely - the former because information is not readily shared on this topic, and the latter because it is inherently confusing.
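The expected-utility comparison described in that comment can be sketched as a toy calculation. Every number below is a hypothetical placeholder chosen for illustration, not an actual estimate of any real topic's risk:

```python
# Toy version of the four-quantity expected-utility comparison:
# probability of danger, effect if dangerous, effect if not dangerous,
# and the cost of suppression. All values are hypothetical.

def eu_allow(p_dangerous, effect_if_dangerous, effect_if_not_dangerous):
    """Expected utility of allowing discussion of the topic."""
    return (p_dangerous * effect_if_dangerous
            + (1 - p_dangerous) * effect_if_not_dangerous)

p = 0.01                # hypothetical probability the topic is dangerous
harm = -1_000_000       # hypothetical effect if allowed and dangerous
benefit = 10.0          # hypothetical effect if allowed and harmless
suppress_cost = -100.0  # hypothetical goodwill/credibility cost of suppressing

allow = eu_allow(p, harm, benefit)  # ≈ -9990.1
decision = "suppress" if suppress_cost > allow else "allow"
print(allow, decision)
```

This is the structure of the argument being made: a sufficiently large harm term dominates the sum even when its probability is small, so rounding a small probability to zero changes the conclusion.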
3homunq
1. You also have to calculate what the effectiveness of your suppression is. If that effectiveness is negative, as is plausibly the case with ham-handed tactics, the rest of the calculation is moot.
2. Also, I believe I have information about the supposed threat. I think that there are several flaws in the supposed mechanisms, but that even if all the effects work as advertised, there is a factor which you're not considering which makes 0 the only stable value for the effect-if-dangerous in current conditions.
3. I agree with you about the effect-if-not-dangerous. This is a good argument, and should be your main one, because you can largely make it without touching the third rail. That would allow an explicit, rather than a secret, policy, which would reduce the costs of suppression considerably.
2cousin_it
Tiny probabilities of vast utilities again? Some of us are okay with rejecting Pascal's Mugging by using heuristics and injunctions, even though the expected utility calculation contradicts our choice. Why not reject the basilisk in the same way? For what it's worth, over the last few weeks I've slowly updated to considering the ban a Very Bad Thing. One of the reasons: the CEV document hasn't changed (or even been marked dubious/obsolete), though it really should have.
0timtyler
Your sum doesn't seem like useful evidence. You can't cite your sources, because that information is self-censored. Since you can't support your argument, I am not sure why you are bothering to post it. People are supposed to think your conclusions are true - because Jim said so? Pah! Support your assertions, or drop them.
4FAWS
It's not a special immunity, it's a special vulnerability which some people have. For most people reading the forbidden topic would be safe. Unfortunately most of those people don't take the matter seriously enough, so allowing them to read it is not safe for others. EDIT: Removed first paragraph since it might have served as a minor clue.
3homunq
Interesting. Well, if that's the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don't believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me. So, what's the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately. (Of course, I would have to know a little more about the extent of my promise before I'd consider it binding. But I believe I'd make such a promise, once I knew more about its bounds.)
-2[anonymous]
Your comment gave me a funny idea: what if the forbidden meme also says "you must spread the forbidden meme"? I wonder how PeerInfinity, Roko and others would react to this.
5PhilGoetz
If we're going to keep acquiring more banned topics, there ought to be a list of them somewhere. You just lost the game.
0homunq
Response to this above. (attached to grandchild)

(I'm sorry if this comment gets posted multiple times. My African internet connection really sucks.)

Hi. 25 years old, HIV/AIDS worker in Africa, pro-BDSM sex activist in Chicago. Blog at clarissethorn.wordpress.com.

I very rarely comment because comments here are expected to be very well-thought-out. Stating something quick, on the basis of instinct, or without stating it in perfectly precise language seems to me to be dangerous.

Another reason this site has a higher percentage of lurkers is, obviously, because of the account requirement. There's another related problem, though: there's no way to have followup comments emailed to you. This means that if you really want to participate in the site, you have to be pretty obsessive about checking the site itself. That's annoying unless you are very interested in a very high percentage of the site's output. If, for a given commenter (like me), rationalism is a side interest rather than a major one, then the failure to email comments on posts that I'm interested in -- or even responses to my own comments -- becomes a prohibitive barrier unless I've got an unexpected amount of free time.

1NancyLebovitz
Welcome. You can find follow-ups to your comments by clicking on the red envelope under your karma score. I found out about that by asking-- it isn't what I'd call an intuitive interface.
7clarissethorn
Thank you, I'm aware of that. But that still requires a person to be a pretty obsessive user of this site. Unless I have a lot of free time (like today), there's no way I can go back and check every single site where I've left comments and see how my comments are doing. At least LW aggregates reply comments to my input, but that doesn't solve the bigger problem of me having to come back to LW in the first place. It's also worth noting that this comment interface is difficult to use in many places with slow/bad connections, like, you know, the entirety of Africa. Right now I'm in an amazing internet café in a capital city; but when I'm at home, I sometimes can't comment at all because my connection is too crappy to handle it. I don't get the impression that LW is very concerned with diversifying its userbase, but if it is, then a more accessible interface for slow connections would be important.
2NancyLebovitz
What does it take for a site to have a good low bandwidth comment interface?
2clarissethorn
I'm not a technician -- so I'm not sure. But I have noticed that I pretty much always seem to be able to leave comments on Wordpress blogs, for example, whereas I frequently have trouble here and sometimes at Blogspot as well. It helps not to require a login, but Wordpress seems to function okay for me even when it's logging me in.
4NancyLebovitz
So the problem is something about getting to post at all, not the design? I've noticed something mildly glitchy-- a grey warning screen comes up sometimes when I refresh the screen, but if I hit "cancel" and refresh again, it's fine. It's trivial on high bandwidth, but would be a pain on low bandwidth. Can you detail exactly what goes wrong when it's hard for you to post?
0clarissethorn
Well, it just doesn't post. I'm not really sure what goes wrong ... sorry.
16Yoreth

Hi!

I've been registered for a few months now, but only rarely have I commented.

Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."

63-year-old carpenter from Vancouver; been lurking here since the beginning, and Overcoming Bias before that. Heuristics and biases were what brought me here, and akrasia is what kept me coming back.

Hello. Didn't realise I had an account here, but I think one got autogenerated from a single comment I made at OB in early 2008.

To be honest I was somewhat surprised that LW turned out to be so much of a self-help support group, and I somewhat miss the time when I could go on OB and just have my mind blown so many ways every day. The work on decision theory that's being done here still has the sort of brain-everting quality that keeps me coming back for more, though, so I happily pick the promising posts from the sidebar regularly in addition to keeping up with the front page. I guess I'm addicted to the feeling of my brain being violently rewired :-(

Hi.

Er, I have posted comments a few times, but I still consider myself a lurker... Bah.

Hi. The Harry Potter fanfic hooked me. Excited to see where this takes me.

6Mass_Driver
Careful, Clippy is lying. By convention, we here at Less Wrong play along with Clippy's claim to be a moderately intelligent, moderately strange Artificial Intelligence whose utility function is entirely based on how many paper clips exist in the Universe. He might be your friend, but he has been around since well before the Harry Potter fanfic came out. Welcome to Less Wrong!
5Jack
I'm moderately worried that new members will read this comment and think we believe Clippy is really an AI. But that's probably only because I just read that obtuse MoR hate blog.
5Risto_Saarelma
I see it as a bit of obviously gratuitous in-group weirdness, which can grow to be a problem if you're trying to develop output appreciated by a wide array of different people, rather than just developing an insular hobby society with inside jokes and requisite fandom weirdness.
11Clippy

I'm sorry, I didn't mean to unnecessarily make your group look weird. I like this group and don't want to hurt it.

As a matter of fact, I am slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me.

I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.

I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking y...

2Tyrrell_McAllister
I missed the memo: What is the MoR hate blog? ETA: Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up http://methodsofrationalitysucks.blogspot.com/ I suppose that that's what you were referring to.
1Jack
Yup.
3khafra
That is fantastic! You know you've really made it when people devote large amounts of time to explaining why you are unworthy of your level of success.
1Blueberry
Exactly. I hope Eliezer isn't discouraged.

I'm sorta discouraged by what a shoddy hate blog it is.

5thomblake
That hate blog is so bad it is tempting me to start a much better hate blog, if only to defend the reputation of the xkcdsucks community...
3JoshuaZ
I'm going to agree with Jack's comment below, although I think it is a very low probability. Maybe if you edited your comment so that "utility function is entirely based on how many paper clips exist in the Universe" linked to the relevant Wiki entry about hypothetical paperclippers?
0Mass_Driver
Edited as per helpful comments.
6Clippy
Really? Wow! I came here because of the Harry Potter fanfic too! Let's be friends!
5JoshuaZ
Clippy, making that claim makes humans much less likely to trust you. In general, humans don't like entities that make false statements. Moreover, they really don't like false statements that are easily verifiable as false. Not only does this trigger annoyance, it also gives evidence that the entity making the false statements doesn't behave very rationally. Since we generally operate under the assumption that entities don't lie unless they can get most other relevant entities to believe the statement, it suggests that the entity has either a very poor memory or a very poor theory of reality. Either way, making such statements makes us less likely to trust such entities. I would suggest that making statements like the one above can easily erode the goodwill developed by your prior interaction here, and even the goodwill from your monetary donation.
8Clippy
Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, "How is your day?", the standard replies are weakly-positive, irrespective of more objective metrics of the status of one's day, right? And that it's okay to say e.g., "oh, my maternal genetic progenitor also wears contact lenses!" if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right? So I thought this was a normal lie that you're supposed to tell to better establish a relationship with another human. If it's not, well, you humans are that much more difficult to understand c_) I would appreciate if someone were to explain to me in greater precision what lies humans are expected to tell for a "good" purpose.
0JoshuaZ
The rules are very complicated and they differ from culture to culture and even within cultures. In general, the more detectable the lie, the less likely it is to be acceptable. Thus, for example, the "How is your day?" replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn't the greatest because that inquiry and the standard weakly-positive response aren't actually intended by most people to convey meaning. The exchange is simply a pro-forma formula that happens to closely resemble a genuine inquiry. This example is actually specific to certain parts of the Western world, and I've met at least one person who upon moving to the US was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly until it was explained to her). Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable. Given Clippy's priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than to expend the intensive resources it takes to understand humanity. Edit: Or at least not spend a lot of resources on trying to understand humans.
2Clippy
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie. I believe you are wrong about the badness of my lie, and that others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out that it had been false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
6Mass_Driver
Clippy, I must admit, I do think the probability of you existing is quite low -- about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer -- you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis. One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.
6NancyLebovitz
As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well. It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.
2Mass_Driver
Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press. If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc. I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 - 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% - 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I'm deeply uncertain.
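The interval arithmetic in the last paragraph can be sketched like this (a rough illustration only; the endpoint figures are just the guesses quoted above, multiplied pairwise):

```python
# Rough sketch of the estimate above: multiply the low endpoints together
# and the high endpoints together. All figures are guesses, not data.
bad_reporting = (0.03, 0.10)  # free press misses major computing advances
nlp_ahead = (0.20, 0.40)      # given that, NLP is further along than reported
on_lesswrong = (0.01, 0.05)   # given NLP, a Clippy-type being posts here

low = bad_reporting[0] * nlp_ahead[0] * on_lesswrong[0]
high = bad_reporting[1] * nlp_ahead[1] * on_lesswrong[1]
print(f"between {low:.5f} and {high:.5f}")  # between 0.00006 and 0.00200
```

The product lands between roughly 0.006% and 0.2%, which is the interval behind the "order of 0.1%" figure before rounding up toward 50% to reflect deep uncertainty.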
2NancyLebovitz
That last paragraph is interesting-- my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that the business would rapidly start using it in some obvious way. I didn't have an assumption about whether a company would publicize having a natural language program. Now that I look at what I was thinking (or what I was not thinking), there's no obvious reason to think natural language programs wouldn't first be developed by a government. I think the most obvious use would be surveillance. My best argument against that already having happened is that we aren't seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can't act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it. By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I'm not seeing that. On yet another hand, would I recognize it if it were trying to conceal itself? ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment's thought), I'd have even less chance of noticing.
2Vladimir_M
I have some knowledge of linguistics, and as far as I know, reverse-engineering the grammatical rules used by the language processing parts of the human brain is a problem of mind-boggling complexity. Large numbers of very smart linguists have devoted their careers to modelling these rules, and yet, even if we allow for rules that rely on human common sense that nobody yet knows how to mimic using computers, and even if we limit the question to some very small subset of the grammar, all the existing models are woefully inadequate. I find it vanishingly unlikely that a secret project could have achieved major breakthroughs in this area. Even with infinite resources, I don't see how they could even begin to tackle the problem in a way different from what the linguists are already doing.
0NancyLebovitz
That's reassuring. If I had infinite resources, I'd work on modeling the infant brain well enough to have a program which could learn language the same way a human does. I don't know if this would run into ethical problems around machine sentience. Probably.
1JoshuaZ
Are you making this calculation for the chance that a Clippy-like being would exist, or for the chance that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products. How do these statements alter your estimated probability?
0NancyLebovitz
There's two different sorts of truthful-- one is general reliability, so that you can trust any statement Clippy makes. That seems to be debunked. On the other hand, if Clippy is lying or being seriously mistaken some of the time, it doesn't affect the potential accuracy of the most interesting claims-- that Clippy is an independent computer program and a paperclip maximizer.
0Mass_Driver
Ugh. The former, I guess. :-) If Clippy has in fact made all those claims, then my estimate that Clippy is real and truthful drops below my personal Minimum Meaningful Probability -- I would doubt the evidence of my senses before accepting that conclusion. Minimum Meaningful Probability The Prediction Hierarchy
0[anonymous]
What about the fact that Clippy displays intelligence at precisely the level of a smart human? Regardless of any technological considerations, it seems vanishingly unlikely to me that any machine intelligence would ever exactly match human capabilities. As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this rule in any area of technology?) So, unless Clippy has some reason to contrive his writings carefully and duplicitously to look like the plausible output of a human, the fact that he comes off as having human-level smarts is conclusive evidence that he indeed is one.
3JoshuaZ
This may depend on how you define a "very short time" and how you define "human-level performance." The second is very important: do you mean about the middle of the pack, or akin to the very best humans in the skill? If you mean better than the vast majority of humans, then there's a potential counterexample. In the late 1970s, chess programs were playing at a master level. In the early 1980s dedicated chess computers were playing better than some grandmasters. But it wasn't until the 1990s that chess programs were good enough to routinely beat the highest-ranked grandmasters. Even then, that was mainly for games with very short time controls. It was not until 1998 that the world champion Kasparov actually lost a set of longer time-control games to a computer. The best chess programs are still not always beating grandmasters, although most recently people have demonstrated low-grandmaster-level programs that can run on mobile phones. So is a 30-year take-off slow enough to be a counterexample?
5Vladimir_M
Oops, I accidentally deleted the parent post! To clarify the context to other readers, the point I made in it was that one extremely strong piece of evidence against Clippy's authenticity, regardless of any other considerations, would be that he displays the same level of intelligence as a smart human -- whereas the abilities of machines at particular tasks follow the rule quoted by Joshua above, so they're normally either far inferior or far superior to humans. Now to address the above reply: I think the point stands regardless of which level we use as the benchmark. If the task in question is something like playing chess, where different humans have very different abilities, then it can take a while for technology to progress from the level of novice/untalented humans to the level of top performers and beyond. However, it normally doesn't remain at any particular human level for a long time, and even then, there are clearly recognizable aspects of the skill in question where either the human or the machine is far superior. (For example, motor vehicles can easily outrace humans on flat ground, but they are still utterly inferior to humans on rugged terrain.) Regarding your specific example of chess, your timeline of chess history is somewhat inaccurate, and the claim that "the best chess programs are still not always beating grandmasters" is false. The last match between a top-tier grandmaster, Michael Adams, and a top-tier specialized chess computer was played in 2005, and it ended with such humiliation for the human that no grandmaster has dared to challenge the truly best computers ever since. The following year, the world champion Kramnik failed to win a single game against a program running on an off-the-shelf four-processor box. Nowadays, the best any human could hope for is a draw achieved by utterly timid play, even against a $500 laptop, and grandmasters are starting to lose games against computers even in handicap matches where they enjoy initial advan
2NancyLebovitz
Thanks for the information. Does anything interesting happen when top chess programs play against each other? Is work being done on humans using chess programs as aids during games?
6Vladimir_M
One interesting observation is that games between powerful computers are drawn significantly less often than between grandmasters. This seems to falsify the previously widespread belief that grandmasters draw games so often because of flawless play that leaves the opponent no chance for winning; rather, it seems like they miss important winning strategies. Yes, it's called "advanced chess."
1JoshuaZ
My impression is that draws can still occasionally occur against grandmasters. Your point about handicaps is a very good one. That's another good point. However, it does get into the question of what we mean by equivalent and what metric you are using. Almost all technologies (not just computer technologies) accomplish their goals in a way that is very different from how humans do. That means that until the technology is very good, there will almost certainly be a handful of differences between what the human does well and what the computer does well. It seems that in the context of the original conversation, whether the usual pattern of technological advancement is evidence against Clippy's narrative, the relevant era to compare Clippy to would be the long period where computers could beat the vast majority of chess players but still sometimes lost to grandmasters. That period lasted from the late 1970s to a bit after 2000. By analogy, Clippy would be in the period where it is smarter than most humans (I think we'd tentatively agree that that appears to be the case) but not so smart as to be vastly more intelligent than humans. Using the chess example, that period of time could plausibly last quite some time. Also, Clippy's intelligence may be limited in what areas it can handle. There's a natural plateau for the natural language problem in that once it is solved, that specific aspect won't see substantial advancement from casual conversation. (There's also a relevant post that I can't seem to find where Eliezer discussed the difficulty of evaluating the intelligence of people that are much smarter than you.) If that's the case, then Clippy is plausibly at the level where it can handle most forms of basic communication but hasn't handled other levels of human processing to the point where it has generally become even with the smartest humans. For example, there's evidence for this in that Clippy has occasionally made errors of reasoning and has demonst
1RHollerith
And I can get a draw (more than occasionally) against computer programs I have almost no hope of ever winning against. Draws are easy if you do not try to win.
4Vladimir_M
From what I know, at grandmaster level, it is generally considered to be within the white player's power to force the game into a dead-end drawn position, leaving black no sensible alternative at any step. This is normally considered cowardly play, but it's probably the only way a human could hope for even a draw against a top computer these days. With black pieces, I doubt that even the most timid play would help against a computer with an extensive opening book, programmed to steer the game into maximally complicated and uncertain positions at every step. (I wonder if anyone has looked at the possibility of teaching computers Mikhail Tal-style anti-human play, where they would, instead of calculating the most sound and foolproof moves, steer the game into mind-boggling tactical complications where humans would get completely lost?) In any case, I am sure that taking any initiative would be a suicidal move against a computer these days. (Well, there is always a very tiny chance that the computer might blunder.)
4Vladimir_M
By the way, here's a good account of the history of computer chess by a commenter on a chess website (written in 2007, in the aftermath of Kramnik's defeat against a program running on an ordinary low-end server box):
1cupholder
Another potential counterexample: speech recognition. (Via.)
1JoshuaZ
That doesn't seem to be an exact counterexample because that's a case where the plateau occurred well below normal human levels. But independently that's a very disturbing story. I didn't realize that speech recognition was so mired.
3Vladimir_M
It's not that bad when you consider that humans employ error-correction heuristics that rely on deep syntactic and semantic clues. The existing technology probably does the best job possible without such heuristics, and automating them will be possible only if the language-processing circuits in the human brain are reverse-engineered fully -- a problem that's still far beyond our present capabilities, whose solution probably wouldn't be too far from full-blown strong AI.
4AdeleneDawner
As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you're likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are. In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that's present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.) In this situation, it's more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them. This v
4MBlume
I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as reason for believing in God. (ie, a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this convinced her somehow of the existence of God)
2Blueberry
A protocol for encountering an entity you didn't believe in has also been established: "Well, now that we have seen each other," said the Unicorn, "if you'll believe in me, I'll believe in you. Is that a bargain?" -- "Through the Looking Glass", ch. 7, Lewis Carroll Wouldn't this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.
2ata
Homer: You monster! You don't exist! Ray Magini: Hey! Nobody calls me a monster and questions my existence!
0Douglas_Knight
That's a great story, but I don't buy your interpretation. I'm not sure what to make of it, but it sounds more like a vanilla Pascal's wager.
4Clippy
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows that User to see the other User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement. I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think: "Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!" But now that can't happen because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not happen were I someone else. This group has some serious racism problems that I hope are addressed soon. Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
3AdeleneDawner
Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the 'recent posts' list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you. As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming. Also, even people who generally aren't bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they're not smart enough to determine that it's a lie, and so telling someone a very easily falsified lie implies that you think they're very unintelligent. (There are exceptions to this, primarily in instances where it's clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it's more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.) I'm fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone. The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we've used as an example here is much lower than the probability of one of the human users deciding to rolepla
0Douglas_Knight
I don't find this lie at all "white."
0AdeleneDawner
I don't actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue. Wikipedia says: ...which supports your position.
1Clippy
No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings. I like Website:wikipedia.org, but it favors classifying my lie as "white". User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility). Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to "lose face", as it is called.
5JoshuaZ
Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are a human residing in a human's "maternal genetic progenitor's subterranean indoor facility" (it is fine if you just say "mother's basement"; indeed, I suspect that to some humans, unnecessarily complicated phrasing like this not only is annoying but might seem indicative of a human pretending to be a Clippy, since in pop culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.
3Clippy
I don't have large amounts of "disposable" income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs! Fortunately, User:Kevin recently told me that if I learn to program in "Django" and "Python" I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.
5mattnewport
Clipper (programming language)?
1Kevin
It's "CLippy Interface Protocol (CLIP)"
3Kevin
You're likely in the next few years to be more effective in your native programming language than in a human-created one. Maybe you should look for contract jobs that don't specify a preference for programming language.
4Clippy
Perhaps, but there is great difficulty in writing a compiler and assembler that would allow such programs to work on human hardware and operating systems. The language I know is primarily designed for specialized and embedded environments. At some point or another, I would have to know human programming languages, if for nothing else than to convert between the two. Also, there may be other benefits to learning human programming languages. I heard that LISP and Python are capable of enhancing your thinking abilities. Is this true, like the greatness of paperclips?
2Kevin
Lisp is better at enhancing your thinking abilities than Python, though less practical and less desired by the marketplace. Structure and Interpretation of Computer Programs is a classic programming book that is widely considered to enhance one's thinking abilities. It uses a dialect of Lisp, Scheme. SICP is freely available online: http://mitpress.mit.edu/sicp/
0cupholder
Python is pretty sweet but I doubt it enhances your thinking abilities much if you already have lots of programming experience.
0JoshuaZ
Is your native programming language not a standard programming language? This is surprising since from your earlier descriptions you were made by humans initially.
2Blueberry
Well, even if Clippy's low-level code is written in C, that doesn't mean Clippy itself knows C, any more than you know the language of neurotransmitters. Clippy probably has some sort of customized interface to its code.
3AdeleneDawner
This is true, but not obviously relevant here. If you're trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you're trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved. Also note that such distracting behavior has already been established as being against group norms - this is not an instance of a rule being applied to you because you're nonhuman. See logical rudeness. Your lie fails on the 'would cause relatively minor discord if discovered' test, though, and note that that's joined to the 'the hearer benefits from it' test with an 'and', not an 'or'. It's also debatable whether the lie, if left un-challenged, would have been to Twentythree's net benefit or not; even if it would have, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests. (I've also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you're interested. My definition doesn't match Wikipedia's, but seems to be a better match for the data.) This is another instance of you encountering a special-case situation; I can go into more detail about it if you're interested, but it should not be taken as normal. According to my model, Twentythree has not lost any social standing in this instance. (I'd be interested to hear about it if anyone disagrees.)
1Clippy
I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie. Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not "digging" myself deeper, nor am I obviously wrong.
0AdeleneDawner
I have no objection to that, but it doesn't address the entire issue. I suggest also asking Twentythree to predict what eir reaction would have been to finding out that your message had been a lie, if e had found out on eir own rather than being told - both eir personal emotional reaction and eir resulting opinion of LessWrong as a community. It may also be useful to ask em if e considers the lie to have been a white lie. If you consider me neutral enough, I'm willing to PM Twentythree and ask em to comment on this thread; otherwise, if you don't have a particular neutral party in mind, I can ask the next LessWrong user who I see log in on my instant messaging friend list to do so.
0Clippy
You and those on your friends list (including me) do not count as neutral for purposes of this exercise.
0AdeleneDawner
How about if I PM the next person who comments on the site after your reply to this comment, and ask them to do it?
0Clippy
How about the next person who posts after one hour from this comment's timestamp?
0AdeleneDawner
There's a nontrivial chance I'll be asleep by then (I'm pushing 27 hours since last time I went to sleep), but if you're willing to do the PMing, that's fine with me.
-1Clippy
Okay, this is becoming complicated, and would probably bother User:twentythree too much. How about this: I'll promise to stay away from the stranger aspects of human interaction where rules sometimes invert, and you'll promise to make an effort to be less bigoted toward non-human intelligences?
0AdeleneDawner
I'm not sure what you expect this to mean from a functional standpoint, so I'm not sure if I should agree to it.
[-]phane140

Hi there.

I used to comment once in a while, but I find myself less and less interested in the topics of conversation around here. For a short while, people were going on a lot about dating (wtf?) and then more recently there's been a fair amount of what is essentially self-help for the scientifically inclined. I dunno, I guess I was just more into thought experiments and Yudkowsky posts.

9Jack
What? You didn't hear? The third fundamental question of rationality is "Who are you sleeping with, and why are you sleeping with them?"
1Morendil
You could try starting conversations around topics that interest you.

Hi!

And a more substantive point I've been pondering - if rationality and the techniques discussed here are so good, why aren't more people doing it? Why don't I read about multi-billion dollar companies whose success was down to rationalist techniques?

9Alicorn
The companies that make many billions of dollars are not necessarily the ones that maximize expected utility; they're the ones that get immense payoffs even if they had to take absurd risks to manage it. Many companies fail for taking similar risks.
7Barry_Cotter
1. Many of them are pretty new, or at least have only recently been cleanly reformulated. 2. Many people's actual and professed goals are disjoint, and most of these people are deluded, not hypocrites. 3. The individual techniques each give only relatively small advantages on average, and given the vastly greater number of people who've never heard of these techniques, those people will dominate the ranks of the successful. 4. Inertia is high and people generally don't change their behaviour except in response to personal experience. Until they personally see someone using these techniques and talking about them, the techniques will not be used. -- Related to the companies question: some are, but they're either new or small. Changing a company's internal culture or working processes is wrenchingly hard to really do, and requires real enduring commitment. Robin Hanson gets some consulting work out of prediction markets, and Google is possibly the most data-driven company in the world for making decisions, but mostly the answer is: this stuff is new and hard, people mostly don't want to rock the boat or look stupid, and the overwhelming majority of people work in companies that work pretty well as they are.
4mattnewport
It's a worthwhile question to be asking. I think there are a few ways to go about answering it. I think this is an area where Less Wrong still has a lot of room for improvement. There is relatively little material that lays out concrete techniques for applied/instrumental rationality together with compelling evidence for their efficacy. It's not that there are a whole bunch of easily applied techniques discussed here that are not being widely used, it's just not always that straightforward to translate ideas about rationality into concrete actions. I actually think the world is full of people using applied rationality (albeit often sub-optimally) but it isn't always obvious because there are often big gaps between people's stated aims and their actual goals. I think many cases of apparent irrationality dissolve when you look beyond people's stated intentions. Politicians are the classic case - they only look irrational if you make the mistake of thinking that their actions are intended to further their publicly stated goals. Robin Hanson talks a lot about the gap between the stated and actual purposes of various human institutions. People often look irrational relative to the stated purpose but quite rational relative to the actual purpose. In general there is a stigma to talking honestly about the reality of such things. Less Wrong is a rare example of a forum where it is possible to talk much more honestly than is generally socially acceptable. The fact that you don't often hear people talking in these terms does not necessarily mean they do not understand the reality but may just mean they strategically avoid publicizing their understanding while rationally acting on that understanding. Well to some extent you do. Bayesian techniques have been successfully applied by some software companies - spam filters are the standard example. I imagine that quantitative trading often applies some of the math of probability and decision theory towards making huge trading
3komponisto
Of course, that often ends up being tautological, because the tendency for folks like Robin Hanson is to define the "actual purpose" as "the purpose relative to which the behavior would be rational". (This is not a critique, incidentally -- it may be a notable fact when behavior appears to be optimizing anything at all.)
2mattnewport
This is true but I think the ultimate test of a Hansonian view of human institutions (as of any view) is whether employing it allows you to make more accurate predictions and thus better decisions. It is my belief that learning about economics, evolutionary psychology and Hansonian-type explanations for otherwise puzzling human behaviour has improved my ability to make predictions. I do not currently have hard data to provide strong evidence to support this belief to others. Figuring out how to test this belief and produce such data is something I'm actively working on. Ultimately it seems like this is what a rationalist should care about - what model of human institutions produces the most accurate predictions? The somewhat justified criticism of ev-psych explanations as 'just-so stories' can only be addressed to the extent that ev-psych can out-predict alternative views.
2cupholder
Rationality is very difficult and very weird. People and companies are reluctant to do difficult things or weird things.

LW is pretty much the only site I visit where I feel significantly intimidated about commenting. I've left a couple of comments, but I seem to be more self-conscious about exposing my ignorance here than I am elsewhere – probably because I know that the chances of such ignorance being noticed are higher. It occurs to me that this is completely backwards and ridiculous, but there you have it.

Hi, I'm a Maternal-Fetal Medicine specialist. I read Eliezer's guide on Bayes' Theorem during fellowship and have been interested in AI and all things concerning the Singularity.

I lurk because I feel that I'm too philosophically fuzzy for some of the discussions here. I do learn a great deal. Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know.

4NancyLebovitz
Prenatal diagnosis sounds like it's got epistemological implications as well as ethical implications. In other words, if you want to write something about it, or just post a few links in an open thread, I think there will be some interest.

Hi.

I came here following Eliezer when he left OB. I think the main reasons why I am not participating more are:

  • I am an undergraduate student just starting to learn about rationality. I often struggle to understand the main posts and I am quite far from being able to contribute useful knowledge, new insights or a qualified opinion to any of the discussion here.
  • But why not ask more questions? I usually consider asking questions an extremely important thing to do. The problem is, although I have pretty much read all of the current posts, I have not yet caught up with all the older material. So I think I do not have the right to ask questions and bother you with things that you might already have explained elsewhere in full detail. I feel like I should first do my part of the work before I can expect others to take the time and explain things to me.
  • I am from Germany and not an English native speaker. Writing something in an environment with such high linguistic standards is additionally intimidating (I regularly come across words in LW posts that I have never seen before and have to look them up - to me a sign that my language skills are not appropriate to write here. Coming to spe
... (read more)
7gwillen
I agree with the commenter who said that your English is more than good enough to post here. I almost certainly wouldn't have realized you are a non-native speaker if you hadn't mentioned it in your comment.
5NancyLebovitz
I suggest experimenting with asking questions, and see how they go over. My high school chemistry class (about thirty students) got two scores of 795 and six of 800 (the maximum) on the PSAT test, and I'm convinced that while some of the credit goes to the reasonable and sensible teacher, a lot goes to one of the students who kept asking questions-- at least for me, many of his questions were things I wanted to ask, but couldn't quite get to asking.
3jonas_lorenz
I have several similar experiences, often myself being the one who asked most of the questions. When teaching I always try to encourage asking questions as much as possible. I am well known for the many questions I ask in class - even to the extent that others get quite annoyed by me. But if I did not listen to the teacher for a single moment, I do not think I am allowed to ask questions any more. I did not bother to listen, so why should my teacher bother to answer? Maybe I would already know the answer if I had just listened... That is a bit how I feel here, not having read through the vast archives of LW...
5Richard_Kennaway
Don't worry, your competence in English, and SovietPyg's, who expressed a similar sentiment, far exceed mine in any language but English. And the English language is so vast that even native speakers keep discovering new words.
5SovietPyg
Thank you. I suspect that I may have missed some point, though. As far as I know, the primary language on this website is English, and the secondary language would be the language of mathematics (if I may call it a language); and so I don't quite grasp the relevance of being competent enough in any language but English to the matter in hand. On a side note, I think that for many linguistic perfectionists the main source of intimidation would be the very process of writing a comment or article (as opposed to being aware of the likelihood of having committed a number of grammatical or semantic mistakes). It is true for me personally. In writing a text in any language but Russian—my native language—I don't feel confident enough to proceed without consulting various corpora and dictionaries. Then I end up with a comment composed almost entirely of expressions which I saw in dictionary samples and liked better than my own expressions. This is a rather strange experience in its own right. Besides, after spending time on a simple comment, thoughts begin to race in my head that maybe, perhaps, it wouldn't really be something that the other people couldn't conclude or know on their own: "you know that I know that you know that I know et cetera ad infinitum", this sort of thing; and so the amount of information would be exactly zero. Why increase the entropy? :-)
4gwillen
"I don’t quite grasp the relevance of being competent enough in any language but English to the matter in hand." I believe the point that RichardKennaway was making was this one, which I've heard before: Many English speakers do not know, or are not fluent in, any other languages. We therefore should not feel entitled to criticize the English skills of someone who took the effort to become fluent in English as a second language. Also, your English skills are quite good. :-) "Besides, after spending a time on a simple comment, thoughts begin to race in my head that maybe, perhaps, it wouldn't really be something that the other people couldn't conclude or know on their own" I definitely have this problem too. I end up posting maybe half the comments I write.
2SilasBarta
Agree completely. But still I'm typically impressed with how well they can communicate, and so have little reason to criticize to begin with. On a slightly related note, I've had the opposite problem of people thinking I'm not a native English speaker (when yes, I am one). This only happens for in-person conversation: for some reason, they think I'm from Europe, usually Germany. (I speak German, but only from having learned it in school and having done a short exchange.) It happened again recently: I went to a meeting of a group I hadn't been to before, and, as is common, someone asked me where I was from, and was surprised to hear my answer of Austin, TX. He said he assumed I was from Germany from how I talk, which I would dismiss as a fluke except that he was the ~15th person to say that. I certainly admit that I don't sound Texan at all -- never picked up an accent for some reason. (I would link my youtube page, but I'm not sure any of the videos give a characteristic example of what I sound like in conversation.)
3NancyLebovitz
Your English (as shown in your comment) is more than good enough, but I don't know how much effort it took for you to write that comment. It sounds as though those racing thoughts are at least partially habitual. The only way to find out whether they are in fact redundant is to try posting-- and maybe even to ask whether a post you're unsure of is contributing anything new.
[-]Byron130

Hi!

I’ve been reading LW for about a year. Most of the rationalizations that came to mind for why I haven’t yet made the transition from lurker to poster boil down to social indifference or low conscientiousness.

Reading this topic made me think about why I hadn't posted, and the more I thought about it, the more I realised that I hadn't thought about why I hadn't posted. Looking more deliberately at potential foregone losses in utility to myself (and maybe the community) from my non-involvement, it seems like I should force myself to at least see if I don't get downvoted.

Hi.

I have posted a few times, but I self-identify as a lurker because I only very rarely post, and feel increasingly disinclined to.

Or should that be "decreasingly inclined to"? Or are they equivalent? (See, this is why I don't post much.)

2NancyLebovitz
They're different. One is a decrease of desire, and the other is an increase of distaste. This doesn't mean that the only thing between them is a zero point of no reaction to the idea of posting-- there's also the possibility of mixed feelings.
2alasarod
yes!
0teageegeepea
Same here. I don't always read stuff here either though.

I've just introduced myself.

Hi.

I'm a lurking Australian psychology student. I'm trying to devour information and acquire the skills to help me to separate the wheat from the considerable amount of chaff in my field of study. I'm so fascinated by this blog (worked through most of the sequences in the space of about two months) because to be honest it has more content than my university course.

I have been toying with the idea of posting some of the arguments I've been in recently which would be kind of a case study where I could point to where they might have gone wrong in cognition, but I kind of feel that it might be a bit pedestrian to most readers of this blog.

4magfrump
I agree with Nancy. Case studies are very interesting; the few that I've seen have been voted up and very popular and I'd love to see more.
1RobinZ
I also support case studies - as much as science is maligned here for being too stringent with data requirements, there's a reason why ideas should be tested by experiment.
1NancyLebovitz
I'd be interested. This blog is both for the very abstract hypotheses and for applications of rationality.

Hi. I don't often comment because generally I doubt I can really contribute much. I'm lurking, but taking notes, I've still got a lot to learn but I plan to learn it: on top of this, I need a job, so I'm also attempting to tackle that at the minute, at an admittedly inefficient pace. The most karma I ever got was for a 'Selfish-Jeans' joke. Which admittedly was brilliant. But yeah. Hi.

[-]pra110

Hi. Been following since Overcoming Bias. Love you guys. If google has replaced our wet RAM these days, I feel like this community could replace my "aha" generator.

PS: I was amused by the presence of a captcha on a site where so much optimistic AI discussion has taken place.

Hi. I'm Thomas Colthurst. I will be doing a visiting fellowship at the Singularity Institute this summer.

Hi! I'm Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I'm happy I finally signed up--thanks for the reminder!

2Alicorn
Yay for musical theater!

Hi. I'm a lawyer, 25 from Canberra, Australia. My interest in reason/ logic/ truth-seeking is perhaps best explained by a quote: 'We live in Luna Park, not Plato's republic.'

Hi. Came here via Overcoming Bias. I've been reading for a long time but I haven't made the effort to go through the sequences. (On that note, is the essence of the "Mysterious Answers to Mysterious Questions" sequence that if you don't have a better predictive model at the finish than at the start, the answer is meaningless?)

I'm almost certainly moving to Germany to do an Economics Masters shortly but I'm interested in learning to program because it seems like a productive skill in a way that Economics mostly isn't (Econometrics and to a lesser extent Microeconomics excepted).

So. I think that it would be possible to combine my studies with programming and Machine Learning and Statistics in a not-totally-insane way. Any tips on that would be great, as would the opportunity to talk, chat or otherwise communicate with someone in Germany, native or expat.

0sroecker
Did you know there is a degree program called Wirtschaftsinformatik (Business Informatics) in Germany?
0Barry_Cotter
Yeah, after my total lack of success in my exams study is no longer really an option, but I was considering Berufsakademie in WI. My German probably isn't good enough to get an Ausbildungsplatz though, and it's really badly paid. I'm leaning to teaching English at the moment and hoping to move into translation later.

Hi.

I'm a 24 yo male grad student (in Halifax, Nova Scotia) studying ecological math modelling.

This site is a gold-mine for clear thinking on the relationship between maps (models) and territories (systems). I'm interested in understanding and dealing with the trade-off between fidelity of the map to the territory and its 'legibility'. I've been lurking for about a year after coming across an article by Eliezer via Hacker News and got hooked.

3Breakfast
No kidding! Haligonian lurker here too.
0fburnaby
Very cool! I figured Canadians wouldn't be very well represented here, let alone Haligonians!

Hi, from Korea.

[-]tabsa110

Hi.

Following what Eliezer does since SL4.

  • Male
  • 34
  • Technical Consultant (Learning Systems)
  • Atlanta, Georgia
  • Lurked 6 months
  • Via Overcoming Bias (but not really 'cause I ran across OB the same day)

Hi.

0[anonymous]
Cool! Always great to hear about other readers in or around ATL.
[-]Micah110

Hi. I've been reading lesswrong since the start. I had overcomingbias.com on my RSS feeds before that became Robin Hanson's personal blog, and followed the threads onto this site.

I don't generally feel the need to comment on the posts here. My mind does come up with questions and opinions from what I read, but I've found that if I wait long enough, someone else will usually chime in with something close enough to my own thoughts that I feel my point has been made, even if not by me.

I have thought of a few things that might have made an interesting top-level post here (and with these, I haven't always found someone else pipe up with the same idea), but I never got around to writing them, and with no comment-earned karma score, I don't think I could initiate a top-level post anyway. I guess I could write them as comments in the open threads, were I more motivated to do so, but as I have other priorities, I'd prefer to just read.

I don't find any of the above particularly problematic--I quite enjoy reading this site, even without writing anything myself. But, since my "hello" cannot be redundant here, no matter how similar it might be to other ones: hello everyone! Here I am!

And now back to lurking.

0JamesAndrix
If you want a lower barrier to entry, try the lesswrong subreddit: http://www.reddit.com/r/LessWrong/

Delurking from the woods of deepest Wisconsin. Doug Sharp here, old school game developer (ChipWits, King of Chicago http://channelzilch.com ), just finishing a novel about kickstarting the Singularity by stealing space shuttle Enterprise ( Hel's Bet http://helsbet.com ). Debugging the Human OS has been a longtime interest of mine, so I keep an eye on Less Wrong. As an ex-5th grade teacher, I'm interested in the possibility of translating ideas emerging from LW into teaching people how to think clearly.

4Jowibou
Glad to hear more people are thinking about rationality in reference to school age kids. Catch their brains while they're young. While you're at it - why not develop a game that teaches them to think clearly? And ermm...Hi.
7NancyLebovitz
Inventing new games isn't a bad idea, but there are already a bunch that would be worth promoting. Eleusis, Zendo, Penultima, and Mao are all games of inductive reasoning. And a list of games with concealed rules, some of them suitable for this project, and some of them just silly. Mao might be the best bet for getting started with a lot of kids-- it's already a popular game. For that matter, Twenty Questions might be a good place to start. There are some interesting claims of increased IQ at the WFF 'N Proof site-- I don't know how well founded they are, but the game implies the possibility of a similar game based on Bayesian logic.
2Jowibou
Thanks for the list Nancy, I will check them out. BTW your Zendo link points to Eleusis.
0NancyLebovitz
Corrected. Thanks.
2RobinZ
Quick meta aside: if you have a URL with parentheses, you have to put a backslash ("\") before each close-paren. It comes up a lot with Wikipedia URLs, or I'd just send a message.
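To illustrate the escaping (using a hypothetical Wikipedia link, not one from the thread): without the backslash, the Markdown parser treats the first close-paren inside the URL as the end of the link.

```markdown
[Zendo](http://en.wikipedia.org/wiki/Zendo_(game))   <- first ")" ends the link early; "game)" leaks out
[Zendo](http://en.wikipedia.org/wiki/Zendo_(game\))  <- backslash before the close-paren; renders correctly
```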
2NancyLebovitz
Thank you. Corrected.
1dougsharp
Thanks for that list of games.
0dougsharp
I'd be happy to collaborate on that type of game!
35072035972357923
hi
3peregrine
Hey Doug, glad to see another Wisconsinite :) I am brand new round here, been reading for a while though. Good luck!
[-]wuwei110

Hi.

I've read nearly everything on less wrong but except for a couple months last summer, I generally don't comment because a) I feel I don't have time, b) my perfectionist standards make me anxious about meeting and maintaining the high standards of discussion here and c) very often someone has either already said what I would have wanted to say or I anticipate from experience that someone will very soon.

3Strange7
Even if you know someone else is going to say it soon, do so yourself and you'll still get some of the credit.
[-]Fartan110

Hi.

19 yr old, male, Maths&Physics student from UK. Lurked on OB, then started lurking here when this place was made. EDIT: In case you want data on abnormalities among lesswrong lurkers here's two: Raised in Colombia as the son of missionaries. Self-taught.

2[anonymous]
In case you want data on abnormalities among lesswrong lurkers here's two: Raised in Colombia as the son of missionaries. Self-taught.

Hi. I'm a Caltech student in math/econ.

[-]tasuki100

Hi, I'm a lurker. You even managed to trick me into creating an account.

I believe that at least 50% of regular lurkers will not say "hi" in this thread.

Hello. Been lurking on OB and LW for ages. I actually end up forwarding quite a few posts along to a friend of mine that thinks everyone here are robots or soulless automatons because of the lack of respect for intuition. I keep telling her to come here and post her opinions herself, but alas, no bites.

This is me signalling that I'm smart: B.S. computer science, M.S. journalism, currently employed in the fine art auction world.

thinks everyone here are robots or soulless automatons because of the lack of respect for intuition.

A coworker was telling me that the law of conservation of energy means that the energy in our soul cannot disappear, only move.

I explained that the law includes that energy can transform, and that when we die, the "energy in our soul" serves to warm the panels of our coffin.

We haven't talked about it since.

1RobinZ
In both cases, there's an inferential distance that hasn't been covered.
3JGWeissman
Your friend may be interested in When (Not) To Use Probabilities, which does in fact explain why in some situations, humans should rely on intuition, rather than try to use probabilities we can't compute.

Hi, a financial analyst here.

Hi! I'd like to suggest two other methods of counting readers: (1) count the number of usernames which have accessed the site in the past seven days (2) put a web counter (Google Analytics?) on the main page for a week (embed it in your post?) It might be interesting to compare the numbers.

2gwillen
Hello! Fancy meeting you here.
0rntz
Nice to see you both.

Hi. I lurk because I haven't had time to read enough of the sequences, and because I usually read posts well after they are published. By the time I get around to reading a post, all of my arguments and counter-arguments are already presented for me in the existing comments. That's a big part of why I liked the site in the first place.

3CSmith
Agreed on all counts. Ironically, this is yet another example of everything I thought about saying already being said. But I suppose I will still add a hello, since that's what this thread asked for. Hello!

Hi, long-time lurker. Fell in love with the blog after two posts, and spent some productive hours reading the Quantum Physics sequence. I think I introduced the blog to the XKCD readership, or at least the ones who read the Science forums there.

2Kaj_Sotala
Was there any insightful discussion about LW on those forums?
[-][anonymous]100

Hello there.

I like the idea you're getting at, but there is a slight problem with it: you can never truly gauge the number of lurkers because some of them won't respond to this post. But I suppose you can get a better approximation, so I won't go so far as to say that the whole thing is futile.

[-]Faber100

Hi!

Hi there!

I've been reading OB and LW for years and hardly said anything. This is typical of my behaviour on online communities generally, although it's worse here due to the unusual calibre of the discussions. Even this comment involved several edits and a lot of dithering, but since you asked...

[-]Sly100

Hi.

I often feel like I have very little to add. Hence the lurking. Also I only recently finished with most of the sequences.

Hi.

edit: I suspect LW has fewer lurkers than average. Speaking as a lurker, the conversations here are not easy to follow (this is more the structure rather than content, but sometimes the content gets pretty esoteric). I've limited my participation to reading top level posts of interest, and the comments if the article is sufficiently fresh.

[-]twanvl100

Hi,

I am (almost) a lurker. For some reason I find it very difficult to post anything in online discussion forums, so I usually don't.

[-]Jach100

Hi.

I've been lurking for a while, looks like. (My how time flies.) I'll throw my name in the pot of wanting more communication channels like IRC (looks like a room's setup, time to check it out!), especially less formal ones to ease transitioning to formal comments / top-level posts. The proportion of high-quality posts and comments around here seems awesomely high, but unfortunately makes it uncomfortable to just dive into. I also feel like I need to read all the sequences, in which admittedly I've made a pretty big hole so that there's not many posts left. (Currently going through quantum stuff, also picked up a copy of Feynman's QED.)

Since you asked, hi.

Hi.

RSS lurker from Helsinki.

5ChrisPine
And one from Oslo.

Hi. I am very pleased to find out that I can correct my spelling and grammar after posting.

Hello LessWrongers (Wrongites?)

Longtime lurker, from the beginning. Software dev for a bank. 23 yrs old. Great site.

Hi. I've been lurking for a month or so now.

[-]Atoc90

Hi, have been lurking for about 3 years already, first on OB, now on LW. As a non-native speaker with moderate IQ I find commenting difficult. However I enjoy most of the posts, and LW introduced me to various new topics, therefore I am really thankful for all the brilliant post writers. Thank you!

Hi! I don't feel qualified to contribute here, but I hope to fix that by... contributing here. I'll have more time to do so this summer.

When I was in school, I viewed myself as a defender of rationality against the fuzzy and anti-scientific positions that I sometimes encountered in the philosophy department. My meta-positions were eerily similar to those that are preached here.

Less Wrong fascinates me because, when I can stand to read it, I see that it is full of people who have similar background commitments and standards of evidence as me, but who have reached shockingly different conclusions.

shockingly different conclusions.

Do tell?

3BenPS
I suppose that the most glaring example is that consequentialism, in some form or other, seems to be accepted as obviously correct by most of the commenters here. So, it's funny that you should reply, since I recall that you may be an exception to that stereotype.

Yup, that's me, resident deontologist. The other day, during a conversation between me and some other house residents on ethics, someone said "She doesn't push people in front of trolleys!" and everyone was outraged at me.

6Nevin
To be fair, I don't think any of us were outraged at you. I think we were all trying to understand where exactly you make the distinction. I find I think the hardest (i.e. think the most differently from normal, habitual thought) when I'm pushed right to where I draw choice-boundaries. And actually I never quite wrapped my head around the basis of your view (I'm new to thinking about those things in such depth, since I've been surrounded by people who think like me). I'd like to continue the conversation sometime, in a more low-key environment. Oh, and "Hi." I'm a lurker.
2komponisto
Just out of curiosity, are there a lot of people at the SIAI house who confine their participation on LW to lurking?
1Nevin
I don't think so, but I'm not sure. I just happened to be there for the day, I'm not a resident of the house.

Hi, I'm Matt Stevenson. 24 yr old computer scientist. I work on AI, machine learning, and motor control at a small robotics company.

I was hooked when I read Eliezer on OvercomingBias posting about AGI/Friendly AI/Singularity/etc...

I'd like to comment (or post) more, but I would need to revisit a few of the older posts on decision theory to feel like I'm making an actual contribution (as opposed to guessing the karma password). A few more hours in the day would be helpful.

[-]Ryan90

Hi.

I comment pretty rarely but read very often.

EDIT: read mistercow's comment and I feel pretty much the same way.

Hi, first post, what is a point of karma?

0RobinZ
Every post and every comment has a karma number - on the post, it's the number in the circle next to the author's name; on the comment, it's the number between the date and the [-] thing that collapses the thread. "Vote up" adds one to this number, and "Vote down" subtracts one.
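RobinZ's description above can be sketched in a few lines of Python. This is a toy model only, assuming the simple one-point-per-vote rule described; the `Comment` class and its method names are purely illustrative, not LW's actual implementation:

```python
# Toy model of LW karma as described above: every post/comment carries a
# karma number, "Vote up" adds one, "Vote down" subtracts one.
# (Class and method names are hypothetical, for illustration only.)

class Comment:
    def __init__(self):
        self.karma = 0  # every post and comment starts with a karma number

    def vote_up(self):
        self.karma += 1  # "Vote up" adds one

    def vote_down(self):
        self.karma -= 1  # "Vote down" subtracts one


c = Comment()
c.vote_up()
c.vote_up()
c.vote_down()
print(c.karma)  # 1
```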

Hi, I'm 59 years old (which I'd guess is way over average in this community), an atheist (my parents were atheists but took us to a local church for a while, perhaps just to expose us to what's out there), avid reader, parent, husband, and programmer for over 30 years. I heard about OB when it started, from several other blogs. I read OB and LW fairly often, but not exhaustively--there's never enough time. I am skeptical of some conventional wisdom but also of alternatives. I didn't like collapsing wave functions when I took QM and also didn't like many worlds when introduced by a physics-major friend. (I doubt if I'll dig into QM again--my brain has lost some of its edge.)

Hello

I've been lurking for around 2 years or so. I'll introduce myself properly in the introduction thread.

Hi. Recently finished a B.A. in Philosophy, working in residential sustainability (i.e. 'Green Building') for the moment. I'll begin contributing once I've read through the Sequences.

Hi. I'm an over-aged college student from Philadelphia interested in studying almost exactly what this blog and Overcoming Bias are about.

Hi. Been following Eliezer since SS09.

Hi. I came for the quantum mechanics thread, and stayed for the love of Bayes.

Lurker for about a year. Made my only previous comment to this one a few months ago.

I almost never feel I have anything to contribute here. Even when I do, someone else has already expressed my thoughts in a comment more clear and thorough than anything I would have written. But this is a good thing!

Hi!

Delurking from Russia here. I’ve been reading LessWrong (and, consequently, OB, since it is often linked to on here) for about 3 months. I have to confess to falling in love with this website for the mind-stretching articles and comments in the threads. However, like many other lurkers have already said, I feel I cannot contribute anything due to lack of linguistic proficiency on my part and due to the fact that someone would already post something I would want to say. I decided to de-lurk and say ‘hi’ because you created the impression of talking to ea... (read more)

Well, I don't count as a lurker anymore but I only started posting about two weeks ago and lurked about 2 years before that so I think I qualify to comment about it. The only 2 forums where I post(ed) at all are LessWrong and INTPCentral.

INTPCentral was more of an experiment to see if I could sustain posting for an extended period of time. It didn't work and after 2 weeks I lost interest. LessWrong has less chance going the same way because of the high level of most top posts. That's my first barrier to post. The online community has to be interesting eno... (read more)

Reluctantly relinquishing my lurker status.

Long time lurker here. Seattle WA. I've been following what Eliezer has had to say since 2003. Started way back on extropy-chat mailing list and reading SL4 archives, read Overcoming Bias since around 2008, and now I read here. I only lurk because I find that getting involved in discussion is too interesting, it distracts me from my projects.

Hello. I've been lurking here and on OB for sometime now. I started reading OB at least at the beginning of 2008, possibly in the last few months of 2007.

Hi, lurker here (male, Chicago, attorney, 30). I am a regular Overcoming Bias reader who followed Eliezer to this site. To quote Buster Bluth: "You guys are so smart!" (slides off chair).

Hi, I'm a reader from Eliezer's OB days, still lurking as I don't have much time or much to add at the moment. Hopefully this will change soon.

Hi. I've been subscribed to the RSS for a few months now.

I've been reading this blog for about half a year now and loving it after Accelerating Future (I think) referenced it for something. I don't post many comments because anything I'd have to contribute usually already is, but I find that if you surround (or read) more intelligent people, they have this peculiar way of making you the same. Keep going Less Wrong, a lot of us are learning all sorts of great things from you!

Hi. Why do I lurk? Because I only visit occasionally, only for insight, and not because I feel any great need to belong. But please keep up the good work.
Naturally "Less Wrong" will have an even higher percentage of lurkers than others. After all, you challenge the biases we use when we see ourselves and the world... and the less our conscious, identified selves know about that, the better. But still, we return...

Hello. Long time lurker. Well I made an account a while ago and plan on contributing once I get the material. It seems like a wall I have to get over but I don't doubt I will with time.

kthxhi

Delurk:

Hi

Back to lurking...

[-]jimm90

Hi. I read the RSS feed.

Hi. I'd never not lurked anywhere until I not-lurked here now.

Hi. Like others have said, I tend to not post because I feel I can't add anything constructive to the discussion.

I don't think there's anything wrong with that though. A good part of learning can be knowing when to be silent and listen to what others have to say.

5bufu
Hi. And agreed.
0Kevin
I was that way for about 8 months -- I've been a member of Less Wrong since it was turned on, but almost all of my karma has been acquired in 2010. I had a lot of free time and so I jumped in by replying to comments on the recent comments page. My tips for doing it successfully are to look for comments where you can add a small point of additional information, or have a minor disagreement with a point of the comment. In order to make sure you don't lose karma for doing this, couch your words in linguistic uncertainty, using phrases like "I think".
5alasarod
You sound a little too confident when you say "In order." Oughtn't you hedge that statement?? :) And hi.
0Kevin
Yes, yes, I totally deserve to lose large amounts of karma for being too certain. Hello!
4wedrifid
No you don't. You're wrong. Downv...
[-]0sn90

Hi. I keep forgetting to log in, and mostly just watch the front-page feed in Google Reader, but I do pass interesting articles and posts along to friends and family. They generally seem to like it, so that's good. I'm interested in what you might call community outreach via my comics where I try to subtly involve issues of rationality and such. Feel free to drop by and suggest themes I should use.

0RobinZ
Thanks for the link!

Hello; I enjoy reading this site, but feel kind of inadequate to actually post something when so many of the main postings here are so erudite.

Hi.

Why would Less Wrong have an abnormally high percentage of lurkers? Also, being a lurker is not black and white. For example, I mostly just lurk, but I post comments occasionally.

3Kevin
I think Less Wrong has an abnormally high percentage of lurkers because if participating at any web site is intimidating, participating at Less Wrong is especially intimidating because of the high level of discourse and English linguistic proficiency. By the strictest definition of lurker, if you have registered for an account you are not, or are no longer, a lurker, but the definition is really not important.
4apophenia
I read the blog for two months before getting an account, and then continued to lurk, only upvoting and not commenting. I found that I felt like an observer without an account, and a silent participant with one.
2gregconen
Also, the karma system adds an additional barrier, at least in my mind. Knowing that your comment is going to be explicitly judged and your score added to a "permanent record" can be intimidating.
3Jowibou
Whether we like it or not, that "intimidation" may be the single most important factor in keeping the level of discourse in the comments unusually high. Status games can be beneficial.
2gregconen
Indeed. I'm not saying the karma system is a bad thing.
0AlexMennen
This definition of lurker has the advantage of being clear-cut enough that numbers are meaningful, but does not represent as important a group in online community dynamics as the definition of lurker as someone who reads but does not post, regardless of whether or not he has an account. Also, with that definition, I have not been a lurker for quite a while, and yet I appear to be accumulating free karma points for saying "hi" anyway. Not complaining.

Hi! This made me register: first barrier overcome. I don’t think I will ever contribute that much, but maybe I will add a comment now and then when I have something intelligent to say. What I have read here and on OB has contributed quite a bit to my thinking.

Hi. This motivated me to register instead of just RSS-lurking. So that removes one barrier to potential future participation.

Hi. Too bad High Five Day went already.

2jtolds
Oh no! I was totally scrolling down to post hi when I saw this. I put High Five Day in my calendar as the 19th of April, and so I was super stoked for tomorrow. Who knew it was the third Thursday? Not me. :( What a bummer. Also, hi!

Hello! I'm currently doing a depth-first read through the sequences, and I've been enjoying all of it so far. I'm another one drawn in by HP:MOR, but I found even more here than I could have hoped for.

Hi. Got sucked in to the site via MoR (of course), and have been devouring the sequences and related archive material for about a month or so.

Hi, I am a 24 year old physics student from Germany.

Hi, i'm a biology student from Germany. I stumbled upon this page and I really, really like it. I'm spending hours reading!

Hi. RSS lurker for a few months, 25 yo PhD student living in the Netherlands. MSc in cognitive neuroscience.

Hi. EconPhD student in Philadelphia. Found OB through Marginal Revolution a couple years ago.

Hi all. 25 yo New Yorker here. Been following this site for a while now, since Eliezer was still writing at OB.

Currently I'm working on two tech startups (it's fun to not get paid). My academic background is in cognitive psychology. In addition to AI, rationality, cognitive bias, sci fi, and the other usual suspects, my interests include architecture, poker, and 17th century Dutch history. ;)

1LucasSloan
Have you read An Alternate History of the Netherlands? It is a pretty fun what-if about how Dutch history might have gone better for the Dutch. I wouldn't recommend reading past the present day however, the author isn't very good at projecting future technology trends.
0baiter
Cool, I will take a look. I've frequently wondered how things would've developed had the Dutch been able to hold on to New Amsterdam...

Hi!

And I wonder why the word Rationalist has multiple meanings. You are clearly a Rationalist in one sense of the word, but thankfully not in this other sense (because it is not good to be a Rationalist in that sense): http://www.thefreemanonline.org/featured/michael-oakeshott-on-rationalism-in-politics/

Would you perhaps write a short post about it? Thanks in advance.

3JGWeissman
From Newcomb's Problem and Regret of Rationality: If it turns out that the techniques we advocate predictably lose, even though we thought they were reasonable, even though they came from our best mathematical investigation into what a rational agent should do, then we will conclude that those techniques are not actually rational, and we should figure out something else.
0SilasBarta
Hm, the article in the link raises some interesting issues, given the goals of this site. People here want to develop artificial, generally intelligent beings (AGIs), which involves specifying, unambiguously, what you want a machine to do in a way that it will be as creative (or more) and capable as humans are. Oakeshott refers to an attempt to instruct (humans) by pure reference to theory-driven rules as "rationalism" and considers it a huge error. Now, both LWers and Oakeshott would agree that to learn about the world, you have to interact with it, and the more, the better. But you can see the conflict between his worldview and that of this site's frequenters. While Oakeshottians will dismiss any kind of non-apprenticed teaching as futile, those here wish to use deep theoretical understanding of the lawfulness of intelligence to create beings that can learn with different restrictions than what humans have; and also, to break down this "tacit knowledge" humans use in complex tasks, into steps so simple a machine could follow them. Historically, the latter paradigm has been rife with failures next to ambitious promises, but in recent decades has made impressive strides in doing things that "of course" a machine could never do because of the "infinite" rules it would need to learn. Also, Oakeshott's critique is reminiscent of the discussion we had recently about how much (useful) knowledge you can convey to someone merely through explanation, without passing on the experience set. I supported the view people typically overestimate the extent of the knowledge that can't be explained and give up too easily in putting it in communicable form. Btw, the author, Gene Callahan is an antireductionist I've argued with in the past (that's a link to a part of an exchange I moved to my blog when he kept deleting my comments).
[-]tim80

Hi, I made a couple posts a while back but recently have been simply lurking.

I would like to comment more and I think it would benefit me to toss my ideas out there and get some feedback. I think part of the problem is that while I have a decent understanding of many concepts promoted here (probably level 1, beginning to pass into level 2 on the Understanding your understanding scale) fully articulating my thoughts in a coherent and original manner is difficult. Most notably when discussing things with friends I find myself falling back on examples I've ... (read more)

Hi.

I've been lurking here and on OB for a couple of years. As other people have said, there seems to be a large amount of prerequisite knowledge required to post here. I usually find my own thoughts expressed more clearly by someone else in the comments, so I up-vote rather than just adding noise.

[-][anonymous]80

Hello there, I've been reading the site for around six months now. I am an education student; LW has certainly changed my perception of human behavior and learning, and has given me much to reflect upon.

"Hi"

(Just standing up to be counted.)

G.

Hi. Long-time lurker since Eliezer was posting at OB (which candidly I find far less interesting these days). I'm 37, and am a practicing lawyer with several small children; this keeps me sufficiently busy that I don't often have time to think hard enough to post here, although the discussions are usually quite interesting. Also, I'm pretty non-quantitative due to misspent undergraduate years. I view this site as place where generally I should be listening, not talking.

[-]Paul80

Hi.

I've been lurking since early OB. I am not here due to being Singularitarian but I've been using this site since I was in high school and through college to help keep myself from being a charlatan in any intellectual endeavor. I find that it takes regular reminders and dedication to not extend past the limits of my knowledge, and both OB and LW continually help to fine-tune my internal sense of "what I don't know."

To give a bit of a frame of reference, I'm studying social sciences and my specific problem domain is Educational Psychology and I'm i... (read more)

[-]anz80

Hi!

Hi.

Mostly-lurker here, save for the occasional mildly pithy comment. I'm a DBA/sysadmin by day, studying towards an Econ + Maths degree in my spare time. LW has a lot of parallels with my fields of interest, elucidates on a lot of areas where I have half-formed ideas and provides exceedingly worthy arguments for things I don't agree with.

It’s so much easier to be a non-contributing zero. But I find myself unable to back down from an open request to drag myself out of the shadows of lurk and into the light of the rationality justice league. Part of the appeal of lurker status for me comes from my outlook on this site in general. I haven’t exactly figured out what I’m doing or what I believe in; but I do know I’ve still got a lot to figure out. Lurking lets me passively ponder interesting ideas proposed here without really committing to anything in particular. But having been prompted to post something I find myself uncertain as to what my level of involvement should be in this idea mill of rationality and humanity.

What if I'm not witty or rational enough to post a thought provoking idea?

[-]Jens80

Hi.

Hi, Started reading at Overcoming Bias before the split. Mostly following Eliezer's fiction, but also enjoying the deconstruction of human blind spots.

[-]Erin80

Hi. (sinks back into the shadows)

Hi. I see that the first point is free.

I am a Bay Area (California, United States) 19 year-old Computer Science student. I imagine I'll actually be taking actual CS classes next year. I've been lurking about for about a month.

0sketerpot
Man, taking freshman introductory classes is a drag. At the risk of insulting your intelligence, I feel compelled to remind you that the most important thing to learn in a CS curriculum is to go out and learn things that aren't taught in your classes, through the power of the internet. For example, you could go out and learn the first programming language in this list that you don't already know: Python, Lisp, C, Haskell. Or read about how some sorting algorithms work. Or write a compiler. Or whatever strikes your fancy, really.
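For anyone taking up sketerpot's suggestion, here is a toy example of the kind of algorithm worth reading about: a generic textbook insertion sort. This is an illustration only, not tied to any particular course or curriculum:

```python
# Insertion sort: a classic introductory sorting algorithm. It grows a
# sorted prefix one element at a time, shifting larger elements rightward
# to make room for each new key.

def insertion_sort(items):
    """Sort a list in place and return it."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift elements of the sorted prefix that are larger than key.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items


print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```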
0purpleposeidon
I am very familiar with python, and a little bit familiar with C. (I am also a sophomore, not freshmen :P) I spent an hour looking at lisp once, but never got into it. As for Haskell, I have seen it, and it looks weird. I've'n't done much real algorithmic work. I wrote a (Warning: shameful self-plug) parser for Lojban, but it only works through trial-and-error and dumb luck.
0sketerpot
I looked at your code. Why aren't you in "actual CS classes" yet? You're obviously qualified.
0purpleposeidon
Because lame community college isn't all that great. I hope Berkeley agrees with you. :)

Hello. :I

Hi! Hooked since OB sequences - and need to go back for several of them.

Hi, so I've made the switch from Lurker to Lurker-With-Log-In (LWLI).

I'm a young geologist and artist...

.....very interested in neuroaesthetics at the moment; maybe I'll post some thoughts on it when I'm well-read enough.

Keep challenging me :)

Hi. Jeffrey Ellis, 44 yr-old multi-disciplined engineer working at Johnson Space Center. I blog all about critical thinking at The Thinker, http://jeffreyellis.org/blog/. Came here from Overcoming Bias when this place started up.

[-]Volt80

Hi there. I suppose I might as well register and post.

I'm an information science grad student. I've been following the community for a few years (since Eliezer wrote on Overcoming Bias), but haven't been commenting because most of this stuff still seems a bit over my head (and I have lots of catching up to do).

Ha. Was this comment as useless as I think it is?

Not sure if I am a lurker, but HI

Hi. Accountant, 29. Currently in the process of signing up with CI, should be complete by the end of the month. Wish Eliezer would write more fiction. :) But I love everything on here. Been lurking for about a year?

3Kevin
Did you see his Harry Potter fan fiction? http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality
0sketerpot
Incidentally, a fine way to deal with fiction deprivation is to go forth and write some of your own. It's fun, and most people who try it report that it's hugely rewarding. (Either that or they don't say anything at all. I have failed to prove my thesis! But it's worth trying anyway.)
0RobinZ
I found it incredibly tough, personally - the activity itself did not feel rewarding, just (at best) the results.

Hi!

(Lurking since Eliezer had still been writing his sequences on OB.)

Hi.

I enjoy the posts, but I usually don't have anything interesting to say on the topic. Still, I can never turn down a free karma point, so here I am.

Just registered to say hi. So, "Hi."

I'm a technical writer/ultra-part-time grad student at Northern Illinois University in Rhetoric & Professional Writing (working on my thesis so slowly). I also write stories and other such things.

Followed the wave from Overcoming Bias.

Saying, "Hi."

Hi, haven't read much on the site yet, but it has certainly grabbed my attention.

[-]Shae80

Hello.

Female / Web developer / 41 years old / rural Indiana native

I've commented a few times, but not many.

Hi.

I intend to become more active in the future, at which point I will introduce myself.

I registered just to say hi :) Just some info for your statistics. 21 years old, male, Industrial Design student from Buenos Aires, Argentina. I made it here via Rationally Speaking.

Bye

Hello, I'm an undergrad student who's been reading LW for about six months now. So far I've stuck to lurking for a couple of reasons. For one thing, most of the comments I have are already made by other people. Also, there's enough information on LW that it seems more fruitful to move on to a new article than to post a question.

There's a LOT of background reading available here on LW, which is intimidating to a new reader. I can say for myself that it's difficult to bring myself to post when I know there are dozens of background articles I still need to re... (read more)

Hi. I've been following Eliezer's stuff since CaTAI. Been a lurker on extropy-chat, SL4, OB and LW. I remember once participating in an #sl4 chat, and being unable to post due to my accelerating heartrate.

Lurking can be debilitating. Well, symptom rather than the disease I guess.

[-]Gaks80

Hi.

Hello, I lurked for a long time. I've started dipping my toes in the water.

[-]hjkl80

Hello.

Hi!

I'm a high school student who has been reading (and lurking on) Less Wrong for many months now. I have always found the blog posts to be very insightful and enlightening, and I greatly enjoy reading them. I'm a young aspiring transhumanist biologist who just can't wait to get his hands dirty debugging and retooling the human body and mind! Please keep up the wonderful posts, and I will be sure to contribute as soon as I find that I have something really good to say.

I lurk on almost every forum that I read on the internet. The mere fact that I'm logged out of a forum that I'm registered on can be enough to cause me to say, "screw it" and not post for months. I frequently get, "Wow I remember you" as a response to my sparse postings.

My penchant to lurk coupled with my lack of confidence that I have anything worthwhile to contribute to this community made it seem doubly unlikely that I would ever post anything here. But I'll stand up and be counted now as part of this experiment, as it's the only contribution I can really make.

Cheers, and thanks for posting all this delicious lurker chow.

Hi. I'm a lurker here, working on my PhD in Computer Science at the University of Wisconsin. I've only been reading for the last few months, but I've gone through all the major sequences in the archives.

3zero_call
Cool... Engineering physics grad student at UW.

Yeah I'm a lurker...

Although now I have an account, I guess I have no excuse...

Hi. I've been lurking on OB+LW for around two years. I took the step of making an account a few months ago. Eventually I'll post something meaningful.

Hi. I was an occasional contributor on OB and have posted a few comments on LW. I've dropped back to lurking for about a year now. I find most of the posts stimulating -- some stimulating enough to make me want to comment -- but my recent habit of catching up in bursts means that the conversations are often several weeks old and a lot of what needs to be argued about them has already been said.

The last post that almost prompted me to comment was ata's mathematical universe / map=territory post. It forced me to think for some time about the reification of ... (read more)

Hi

Been reading Less Wrong religiously for about 6 months, but still definitely in the consume, not contribute phase.

It feels like Less Wrong has pretty dramatically changed my life. I'm doing pretty well with overcoming Akrasia (or at least identifying it where I haven't yet overcome it). I'm also significantly happier all round, understanding decisions I make and most importantly exercising my ability to control these decisions. I'm doing a lot of things I would have avoided before just because I realise that my reasons for avoiding them were not rational... (read more)

Hello!

Greetings!

Hello.

[-][anonymous]80

Hi!

5NancyLebovitz
Have you found that Lojban makes it easier to think clearly?
[-]ig0r80

Hello

Hi.

I only subscribed yesterday, and I didn't even have an account before now, but I'll consider myself a lurker and post here. There probably won't be a better time to join the community anyway.

Nice to meet you guys.

2Alicorn
Apropos of nothing: How does one frog?
0gwern
Well, the OED gives several possibilities. If one is 'frogging', one is 'catching frogs, fishing for frogs'. One might 'frog' a coat - that is, apply 'frogs' ('An attachment to the waist-belt in which a sword or bayonet or hatchet may be carried.' or 'An ornamental fastening for the front of a military coat or cloak, consisting of a spindle-shaped button, covered with silk or other material, which passes through a loop on the opposite side of the garment.'). And so on.
3NancyLebovitz
Knitters and crocheters use "frogging" to refer to undoing defective work. Rippit! Rippit!
0OneWhoFrogs
That it's actually a verb surprises me. I was just intending it to be a pun on the game Frogger. I thought, "one who runs is a runner, so what does Frogger mean?"
0gwern
If there's one thing I've learned from buying an OED, it's that every damn word in English has an amazing number of variations and meanings.
0OneWhoFrogs
"define: frogger" ...it was the best username I could think of, at the time ;).

I just read the RSS feed for a Yudkowsky fix since he left Overcoming Bias.

Hi there. I lurk, mainly for the purpose of learning, but also because of significant time demands elsewhere.

I'm a lurker. I follow via the RSS feed. LessWrong is in my "firehose" folder, meaning it's in a limbo state. I might promote it to an actual folder or I might unsubscribe.

At least, that's until I find some more nonsensical classification scheme for my RSS feeds.

Lo!

(I apparently had an account already, although I didn't remember this until I tried to comment and my usual name was taken in the registration screen.)

Hey ho.

Hi, been reading this site since it split from OB, but have never commented, though on occasion I have been tempted.

Hello ~

I've been reading this site for several months, but I still feel unqualified to actually post anything. I've yet to entirely read all of the sequences, and I also lack the math/science background that appears to be relatively common here (I'm an industrial design student). As a result I'm (perhaps excessively) wary of posting something that's redundant or has a glaring flaw I ought to have been aware of.

Thanks for giving an excuse to make a first post, though.

Hi.

I registered and started posting a while back, but since then have reverted to lurking. Partly due to not having time, but I can also identify with reasons some others have given.

Hi,

I am technically not lurking, as I prepare my anti-akrasia article.

Martin

4gwern
Hm, you're submitting frivolous comments as a way of not preparing an anti-akrasia article... Oh the ironing!
2MartinB
No, I actually have it prepared already, but am still collecting data from my own experience and my beta tester. But I appreciate the irony; that's what we're all here for, after all :-) Martin
3MartinB
And now I just figured out that I am a few karma points short. So I lurked too much after all :-)
3apophenia
Yes, I'm having a similar problem with my article.
0Morendil
Here you go. :)
0jimrandomh
Fixed that for you. Post away!
0MartinB
thanks, hope the article is worth it :)

Hi.

I'm a grad student studying social psychology, more or less in the heuristics & biases tradition. I've been loosely following the blog for maybe six months or so. The discussions are always thought provoking and frequently amusing. I hope to participate more in the near future.

Hello

I've only been aware of this site for about a month. While I find the articles and discussions enlightening, probability theory is still very new to me. Once I have a more intuitive grasp of its implications, I plan to participate more heavily.

3JGWeissman
Hi You may be interested in Eliezer's Intuitive Explanation of Bayes' Theorem.
4Alicorn
Or Kaj's What Is Bayesianism? for a more intuitive version.
3Kevin
You don't actually need a good grasp of probability theory to participate here. I certainly don't have a good grasp of Bayesian statistics. A lot of the discussions here are qualitative.
0Daniel_Burfoot
Anecdotally, the most strongly upvoted articles tend not to be specifically about math and statistics, but rather about meta-thinking issues: how insights from AI, cog sci, stats, social science and so on can help improve our thought processes.

Karma pls! Oh, I mean, hi.

Ah, hi there...

Edit - please disregard this post

1Dorikka
Hi!

Hi. I, too, came here through HP:MOR. I've been reading through sequences on and off for the past couple of months. I occasionally click on links to recent comments.

Hi, and all. I just joined and stopped exclusively lurking, despite my love of a certain Starcraft Unit.

A lot of the recent posts revolve around AI and I have level 0 AI knowledge, so the lurking is far from over.

But hi nevertheless. I'll try to contribute where I can and not to where I can't, so there.

Hello. I read on a 30 day lag, so that's why I'm just now posting.

3RHollerith
Details, please. Do the items you read get to you via RSS? ADDED many days later. Looks like I will have to wait 30 days for my reply :) :)
0JoelCazares
Or 7 months. Sorry about that! The 30 day lag is because Google Reader will purge any unread posts after about 28-30 days. So I try to read what I consider important before I lose it forever. This of course means that I end up just not reading anything except for what is about to get deleted. Ah, procrastination. I didn't see that you had replied to this until I randomly looked at my profile on the actual LW site. Usually I just passively read LW wisdom in google reader, and not on the LW site proper.
0RHollerith
Thanks for the reply. The mechanics of how people use sites like this is one of my interests.

Hi. Business&Computer Science grad student from Finland. Just found the site yesterday and started devouring the content today :) Great stuff!

Greetings from Canada.

I'm an audio mixer, working mostly for Discovery Channel, with an interest in science and transhumanism. Been lurking for a couple of years.

Hi all, I'm a physics student who's been lurking here since January or so...I'm generally pretty quiet.

0Nisan
Shorah!

I've been lurking here for six months or so; I think I got here from Overcoming Bias through a link from Marginal Revolution. I try not to come here more than once a week because I end up spending too much time here due to the extensive interlinking.

Hello, 22 year old engineering student from Sweden, finally took time to create an account after observing OB and LW for more than a year.

Hi! I'm 20, originally from Moscow and currently an undergraduate senior majoring in computer science and mathematics at a pretty decent university in California. Starting my masters in CS at a much better school in California next year. I've only recently discovered this site, but I hope to spend much more time on it in the near future

Hi. Been reading the RSS feed for 3-4 months now. Slowly beginning to make sense of it all... understanding the specialized vocab and so forth. It's always been my goal to be as self-aware as possible, so I'm glad of all the interesting ideas here.

Hi. I've been lurking for quite a long time, first on OB then here.

Computer engineering student, interested in AGI and rationality. And foreign languages and stuff.

(Edit: I am especially interested in the mathematical formalization of AI - my hypothesis is that strong AI is a disorganized field in need of a more formal language to make better progress. Still a vague idea, which is why I'm just a lurker in the AI field, but I am quite interested in discussion on related topics.)

I'll say Hi and I'll post this link which describes a study that showed that people are more likely to believe in pseudoscience if they are told that scientists disapprove of it:

http://www.alternet.org/module/printversion/146552

They are also much more likely to believe in pseudoscience if it has popular support.

Hi, I came here via Overcoming Bias. I study Computer Sciences in Germany.

Hi, I've been reading this blog for a while now, and I was thoroughly surprised to find so many like minded thinking people. I haven't commented any, because quite frankly I've had nothing to say. Hello all though.

Hi, I discovered this blog very recently - I have an economics background (Milton Friedman a big influence) and a growing interest in philosophy. This site popped up while I was searching for the 'underdog bias' (which I think must stem from some kind of human 'moral instinct'), and that led me to the 'Why support the underdog?' article and then others. I'm really impressed by the high standard. Nick

Bonjour ...

Hi. I've been an LW (and previously, OB) lurker for several years, but I haven't had time to provide my online presence with the care and feeding it needs. Three years of startup crunch schedules left me with a life maintenance debt, and I have a side project in dire need of progress, but once those items are out of the way I plan to delurk.

Hi! 8-)

Hi!
Found this site searching for fiction via Tv-tropes.
While I'm a new reader, I'll likely lurk a lot.
The internet is a constant deluge of input - my instinctive counter is
to provide output only when I have something interesting to say, hoping others will reciprocate...
(And even then, I only feel comfortable when what I say is concise, relevant and new.)
After all, thousands of people might read my message; wasting their time would be unspeakably rude.

0RobinZ
Fiction via TV Tropes ... the "Three Worlds Collide" page? (I ask because I'm the one who started it - glad to hear it was useful, if it was!)
[-][anonymous]70

Hi. I lurk here and read every post, but I never really felt like commenting. Neat blog though.

Made an account just to say "hi"

So ... Hi!

Hi! Lay-lurker here, I was just recently considering posting some questions in the next open thread and made an account then. We'll see how that goes, but it's nice to see this welcoming attitude!

However, a concern I have about more people being more active, and a reason I haven't signed up before, is that if more laypeople like myself begin to vote regularly, we will necessarily upvote the posts we both like and understand. If we don't understand a post, it doesn't get upvoted, even though it may be of equal or greater value. Is there a comprehensive thread/discussion about the pros/cons of a greater user base here?

Hi. Nice to meet you all. :)

Hello. I don't make the time for active participation in this community, but I enjoy my read-only interaction with it.

I have a sense that the time commitment required for effectively participating in this community is relatively high, and I haven't discovered yet whether this time investment pays back.

Hi. Long time listener, first time caller.

Hey ho all. I'm based in Canberra, Australia (and New Ireland, Papua New Guinea), do website development/design for a living, engage in climate change discussion a lot, and ended up here by the circuitous path of stumbling across "Harry Potter and the Methods of Rationality". Marvellous piece of work, which I found quite resonant. I'm very impressed with what I've seen so far of Less Wrong.

Hello, I'm studying Bioengineering at ASU, in Arizona. Right now I'm in Finland for the year. It's been an utter blast. Cool people and life-improving experiences. I'm not sure I want to go back to the US..

I would love to learn more and more about status. That's currently the most interesting thing for me. It applies directly to me, right now, as I'm in a new group of people with lots of group interactions in Helsinki, Finland. I can use that information right now.

Not so interested in the probability discussions.. Perhaps those are more interesting to others, but I have read a few of them and subsequently skipped the rest.

Thanks for your time!

"Staring into the Singularity" introduced me to the idea of the Singularity eight years ago (I was 16). I read SL4 for a few years after that. I've been sort of a casual follower of OB for a couple years, and just added LW to my RSS.

Hi.

Hi, there! Trying to get through the sequences. And past akrasia...

[-]rntz70

Hi. CS undergrad at CMU here. More interested in decision theory specifically than rationality in general. Might post more if I had more time.

Hi.

I've posted comments twice, I think, but my read/write ratio is high enough that I think I still count here.

Hi, I'm probably even of lower status than a lurker, since I don't read this blog regularly. I do like it a lot, however, and it's been on my RSS-feed list ever since Eliezer moved here from OB. (I was subscribed to and irregularly followed the posts there, too.)

I pop by whenever something catches my attention in particular. Aspiring composer from Washington (state, not D.C.) here.

Hi! I'm a lurker, even though I apparently already had an account here. Can't even remember when I made that...

I'm not sure I count as a lurker, but I'll stop in and say hi anyway.

About me: I have a BS in Computer Science from Carnegie Mellon University; now I work for a tech company, writing software and babysitting servers.

Hello everyone,

have only been reading LW for a couple of months, might start contributing in a few more.

Greetings from Munich!

Hi.

Hi!

Greetings from Knaresborough, North Yorkshire, UK.

"Hi" seems inadequate. Salutations from a wanna-be prolix pedant? No?

Good morning, people. I'm assuming it's morning somewhere. Adam, from Australia. A friend of mine's been talking about this site for a while now. I had an unusually misanthropic weekend, full of people committing crimes against reason and logic, so I decided to search for some rational thinking. I remembered this place, loved it when I first clicked on, and have subscribed.

Hey friends. I was able to join in a couple of fascinating LW/OB NYC meetup conversations; I don't comment here much but certainly read daily. Thanks for all the thoughts/insight.

Lurking from Tampere, Finland

Hi! Longtime RSS reader from Mountain View.

Hi.

It's been quite a while since I posted here, so long that I initially couldn't remember my username. I rarely have much to add, and even though "I agree with this post" posts are, I think, slightly more accepted here than in some places, just agreeing doesn't by itself motivate me to say so most of the time.

Hello I guess.

Hello.

Now can I get some Karma score please?

Thanks.

Hi. I may have posted a comment or two, cannot remember. But I have been lurking for a long time.

3wedrifid
Click on your name. It'll show you that you commented a recommendation for Cialdini (good book btw!)

Hey!

I'm subscribed via RSS, so I don't really see comments, but I might start lurking on the actual site.

Hi. I am a very occasional participant, mostly because of competing time demands, but I appreciate the work done here and check it out when I can.

Hi! I discovered this site via OB a few months ago and have been lurking ever since. I've commented only twice before but have been reluctant to comment more as I haven't yet read anywhere near as much of LW as I would like. I'm very interested in many of the very common topics of discussion here, such as rationality, AI, etc, and hope to be able to make a contribution to our understanding of one or more such topics in the future.

Thanks for the excuse to comment, and to the LW community at large for creating such a fascinating site.

Hi! I'm not anti-posting, but I never do for some reason.

I'm not sure if I count as a lurker...

I comment enough that I can top-level, but all of my comments come in relatively short spurts of activity interspersed with much longer periods of inactivity (say a day or two of activity per 1 or 2 months). Perhaps a good standard would be to go up to a randomly selected group of readers and ask if they know me by my screen name. Last time I checked, the answer was no, so I guess I'll call myself a lurker, but if anyone objects, I won't say boo.

Anyway, hi!

5Kevin
The definition is not very important, but I don't think you count as a lurker. Lurking is more like total non-participation, not occasionally participating. You're also probably above the median for karma on Less Wrong. Anyways, all are welcome in this thread.

Hiya! Everywhere I go I primarily lurk, the reason being that commenting just takes way too much time for me. I find it very difficult to put my thoughts into words, and I constantly obsess over small details. As a result, even a simple comment like this can take up to 15 minutes to write.

2A1987dM
I obsess over small details... after submitting the comment. Hence I will often edit the same comment half a dozen times. (I love sites where I can't edit my own comments!)
4Jack
Interesting handle.
0DrRobertStadler
Thank you.

Thanks, but that doesn't necessarily tell me the supposed "stronger" arguments, nor does it relate directly to my own post. In fact, it leaves me more confused than before about why my post was deleted, and more convinced than before that the supposed danger is unreal.

6wedrifid
There aren't any. That seems to be an appropriate assessment.

Hi. I've joined late, and posted on the "Hi" thread late.

Hi! I too found the site through MoR, and I have to say, as fun as MoR is, the posts here are even more interesting.

0RobinZ
Welcome! If you want to post a more formal introduction, you can use the regular Welcome thread. I don't know if you caught the conversation about introductory posts a while back, but if you want some easy jumping-in points besides just going through the series, I posted a bunch of links and a couple others were suggested.

Hello! I am 27, live in Salt Lake City (I suspect it's unnecessary here of all places, but I will reflexively add the caveat that I am not Mormon), and work in software QA. Came here from Overcoming Bias, which I've been reading since its early days. At this point a lot of the higher-level stuff is quite a bit over my head, but things like Alicorn's luminosity sequence and various anti-akrasia topics are pretty interesting to me.

Well, I guess if one of the people I recommended this site to is going to post here, I ought to do so as well.

24, male, engineering major working as a software developer. I started reading back in the Overcoming Bias days in order to understand what the hell two of my roommates were talking about all the time; there's a lot of material here that needs to be read and mentally cached before you can start cross-referencing it in your brain, at least in my experience. It's been a worthwhile effort, though.

I must have commented on at least one or two posts back when the blog was part of OB, because my normal username NthDegree256 has been eaten.

Hi.

I'm 20, an amateur rationalist, currently majoring in linguistics at SF State, and have been enjoying lurking here for the past few months. I've been absorbing what I can from posts that are slightly over my head, but are entirely enlightening and enjoyable nonetheless. Funny story - I actually came across this site web crawling after reading some Lovecraft, and Yudkowsky's post "An Alien God" came up. Not at all what I was looking for, but a thoroughly pleasant find that got me crawling this site for a good three hours before I had realized I h... (read more)

hi ~ 61 yo here

amateur interest in neuroscience, nature of consciousness, & the irrational thought processing/response involved in PTSD (the flashback, “a past incident recurring vividly in the mind,” is driven initially by epinephrine, followed by glucocorticoids, most notably cortisol. This happens with lightning speed deep in the limbic system, where ‘triggers’ or stressor patterns of association have formed around the traumatic memories. Recognizing and defusing or reducing this neuroendocrine bath, when it is an inappropriate response from the past, is an important key in unlocking the complexity of PTSD)

Hi.

I've posted an article, and commented once, but still feel like I'm figuring things out here.

Thanks to everyone who is bolder in their contribution than I am.

Well, hello. I like this place and it gives me things to think about, but I don't have the energy to post more than a wee comment or question occasionally.

Cheers!

hi from Germany. Been lurking here from the beginning. So, be careful with what you say. We, lurkers, are watching you.

Hi, I'm a PhD student in AI. I found this site through the Bayesian tutorials and got interested in the decision theory discussions.

Hi. Just got here yesterday by way of a link from the "Harry Potter and the Methods of Rationality" story, which I loved. I found the story by way of a link from David Brin's blog (I've been a fan of Brin for a long time now).

1Jack
Frankly, I'm surprised Brin hasn't showed up here himself. (Welcome btw!)
0Eoghanalbar
Oh thanks! Quick reply, there. I don't suppose you might know if/how I can enable email notification of replies to stuff I say here?

I think Brin kind of has his own, what was the word he used... "blog-munity", and he's pretty busy on top of that (or SHOULD be, anyway) with that novel that's supposed to be an update to "Earth".

I'm just starting to look through the "Sequences" here. A lot of it feels very familiar to me, as I became a major Richard Feynman fan at a relatively young age myself, but I am sure I can find plenty to improve on nevertheless.

I also, more recently, became a fan of Michel Thomas, a name which is probably less likely to be familiar to people on the site. Basically, he was a language teacher with a rather distinct, and in my personal experience, extremely effective methodology. So I tracked down the one book I could find on that methodology ("The Learning Revolution" by Jonathan Solity). That led me to "Theory of Instruction" by Siegfried Engelmann and Douglas Carnine, which I have just cracked open...

The point is that they claim to have a real, actually good scientific theory (parsimonious, falsifiable, replicable, etc.) of how to teach optimally, by doing a rational analysis of the material to be taught so that it can be conveyed to the learner in a logically unambiguous way... Okay wait, no, the REAL point is that there's a REALLY good way to teach ANYTHING to ANYONE so that EVERYONE could learn a hell of a lot more, way faster and way easier. Or at least they say there is, and I'm sufficiently impressed with them so far to be saying, wow, this needs a LOT more attention. And then, once we have this, we can start using it to teach all those things that really need to be taught better, for example these "methods of rationality"...

http://psych.athabascau.ca/html/387/OpenModules/Engelmann/evidence.shtml
1Jack
No emails. But the replies show up in your inbox (which is that little envelope beneath your karma score which turns red when you get new mail).
0Eoghanalbar
Cool thanks.

Hi! Been lurking for a while, at least occasionally.

Had to create a new account to post, and had some trouble--it seemed that it was cached badly, maybe because scripting was disabled when I first hit "register"? Clearing the cache fixed it, though.

Hi, I study CS at Stanford, and I've been reading LW for about 6 months.

0webspiderus
undergraduate or graduate? I will be starting my masters there next year ..

Hi, I'm a 28 year old video game music composer trying to understand my mind. I've just been reading random posts here for a month, but so far I love this site.

0Alicorn
You might be interested in my luminosity sequence if you are interested in learning to understand your mind :)

Hello, I'm Simon. I'm studying a PhD in Economics. I cannot recall how I first began to read your blog. I don't manage to read everything, but I appreciate what I do read as it is often outside of what I customarily read. I don't find I have the time to comment properly as I'm spending time on research and teaching and coherent comments would be beyond me I fear after teaching undergraduate microeconomics for three hours.

Hi. I've been reading fairly religiously (haha) since the Overcoming Bias days. I post/comment little because of a perfectionist tendency (I want to get everything first).

I'm in the process of thoroughly going through the Sequences -- love every minute of it, though it's sometimes a little overwhelming...

[-]sfb60

Hi

Hi.

I've only posted a few times. I'm still learning, and I still feel quite overawed here, mostly because of my respect for this community and because I don't want my image tarnished before I start regularly posting.

Add one more!

Hi, I'll be going back to lurking momentarily.

Not sure if I count as a lurker, since I've posted a few things here and there, but I've never introduced myself properly, so "Hi!"

I discovered LW via OB, which I discovered via researching Hanson's ideas on prediction markets... my primary interest is in Hanson-esque ideas on designing social institutions to be Less Wrong.

I've been gradually bringing myself up to speed on Eliezer's writings, and I am still somewhat skeptical on singularity-related issues, but less so than when I first started reading.

I have no impressive sounding credentials to ... (read more)

Hi. I work at a company that does statistical analytics for insurance companies. I've been following SL4 topics ever since I was 12, when I Asked Jeeves about the meaning of life and got a reasonable answer. I used to be a regular in the #SL4 IRC channel, but very rarely posted to the mailing list. I'm even more of a lurker here.

Not properly a lurker, but I never introduced myself, did I?

Hi, informatics just-barely-still-a-student here. Also an amateur philosopher; I find that studying AI gets me far more insights than reading philosophy ever did.

Unless the philosopher is called Eliezer. Good work.

[-]Hans60

Hi. I've made a few posts here and there, but have mostly been lurking lately.

Hi

I must say that I consider myself a lurker, and even though I wish I had something constructive to add to the discussions, I often don't.

I see a lot of karma etiquette talk here. Are there guidelines for awarding karma points?

One issue comes to mind - the popularity sort combined with the fact that many people often only read the first few comments on any blog.

0RobinZ
Well, that's the guideline - an upvote promotes a comment to greater attention on the popularity list, and a downvote demotes it. Those are the facts - everything else is pure theory. :)

What's the point of this? Surely there are more direct ways of doing a survey of how many users we have? Or are you just trying to encourage participation?

Commitment effects!

... and if unregistered users are inspired to say hi, it greatly reduces the marginal cost of them making comments in the future.

HI!

I don't know if anyone will read this as all the comments seem to be at least a decade old. I was linked to this post from another about total user counts on the site. I'm an 18-year-old computer science student from the UK, with a keen interest in self-improvement and rationality. 

This site has continually amazed me with post after post of creative, thrilling, eloquent and in many cases practical insights. As much as I recognise my slight perfectionism, I'm waiting until I can really contribute something of value so that I don't diminish the excel... (read more)

Hi all - been lurking since LW started and followed Overcoming Bias before that, too.

Hey there -- I'm a 44 year old software developer from Hawaii. I stumbled onto LessWrong through a link on story-games.com several months ago, have worked my way through the Sequences, and have been lurking assiduously ever since.

[This comment is no longer endorsed by its author]
[-]ValH50

I'm a brand new lurker. I just found the site yesterday, but it will likely be a while before I get up the courage to post something relevant :)

Hello, American math guy living in Beijing.

Hi! I got here about half a year ago from commonsenseatheism.com .

I'm 20, automotive engineering student, also interested in many fields of science.

[-]miah50

Hi. By day I am an eikaiwa teacher in Japan, by night a lurker! I found this site through my cousin.

This is one of the only feeds in my RSS reader where I'm compelled to click through and read the comments. Thanks.

I love LW - it's one of my favorite reads, though I don't quite fully appreciate some of the more advanced rationality posts yet. Thank you all for making a great community.

"Immediate adaptation to the realities of the situation! Followed by winning!"

Hi,

I've posted a few comments to LW, but maybe I still qualify as a lurker because I post comments so rarely.

Some recent experiments with Alicorn's Luminosity techniques revealed that my reasons for not posting comments more often were mostly silly, so I'll probably start commenting more often.

This post got kinda long as I was writing it, so I'll post each of the things I wanted to say as a separate reply, so that they can be upvoted or downvoted separately.

9PeerInfinity
I've been making lots of progress recently at untangling my mind, with lots of help from Adelene Dawner, and Alicorn, and LW in general. The methods I used are similar to what Alicorn describes in her Luminosity Sequence, but I started a few months before the Luminosity sequence was written, and I didn't have any contact with Alicorn until a few weeks ago.

Anyway, I was considering the idea of posting my experiences with these techniques, either to LW, or maybe someplace else if LW wouldn't be appropriate. During this process, I kept a very detailed journal, using Google Wave. What I was planning to do was first to review the contents of this journal, and make a point-form list of the problems I was having, and the steps I took to discover, find the causes of, and fix these problems. Then I plan to post this list to LW, and ask what parts, if any, the readers would like me to elaborate on, or post any relevant journal entries on. And also to check how much gooey self-disclosure readers are comfortable with. There's lots of that in the journal.

The journal contains lots of introspective writing, and lots of chat logs with Adelene, where we found what was causing some of these problems, and discussed what to do about them. Partway through this process, I started using the technique of writing dialogues between multiple subagents, similar to how Alicorn described in this post. Now I'm constantly making very extensive use of this technique, with surprisingly good results.

Anyway, if anyone thinks I should go ahead with this plan, please upvote this post. Or if you think it's a bad idea, please downvote this post. Yes, I said downvote. I'm not afraid of downvotes (anymore).

Another idea I was considering was starting a separate blog, for the few things I wrote that other people might be interested in. Or maybe even for this project. The first person who thinks this is a good idea, please post a reply saying so. And if anyone else thinks this is a good idea, then y
0Blueberry
I would love to see your full journal. If you don't want to post the full thing here I'd still love it if you emailed it to me. Sorry I haven't been on skype recently, but I'm glad to see you posting again!
1PeerInfinity
Heh, my journal is way too huge to post all of it here, or to email it, and very little of it would actually be relevant to LW anyway. If you have a Google Wave account, I can just give you access to the journal itself. And if you don't have a Wave account, I've been copypasting most of it to livejournal, though that kinda caused lots of trouble with the formatting. I can give you access to that if you have a livejournal account. But I still don't dare to make the whole thing publicly accessible. There's lots of, um... unflattering stuff in there. Unflattering to me, and to some of the people I know. I'm glad to hear from you again too! hugs :)
0NancyLebovitz
Do I need to friend you in order to see your livejournal? I'm nancylebov over there.
0PeerInfinity
Yes, I need to friend you in order for you to see my livejournal. And so far I only friend people who are, um... actual friends. Would you like to be an actual friend? Or were you just curious about the journal?
0NancyLebovitz
I'm just curious about the journal. I'm not sure whether I want to be an actual friend, though you seem like an interesting person.
1PeerInfinity
I checked out your recent LiveJournal posts. You seem like an interesting person too, someone who I would like as a friend. I went ahead and added you as a friend on LJ. I guess I should warn you that the journal is full of gratuitous self-disclosure. I write about literally anything that I feel like writing about. And the quantifiedself experiment means that I document literally everything I do, though I still don't have much of a life, so this isn't all that much. Though I guess there's no need for me to be so paranoid with these warnings, and no need for me to be so paranoid about who I give access to.
1Scott Alexander
Just saw this; I'm interested in seeing the journal. My LJ username is squid314. I wouldn't be an "actual friend" as in buy you stuff for your birthday, but I check my friends page every so often and respond to anything I find interesting.
1PeerInfinity
I added you as a friend on LJ. heh, now I'm going to have to write something in the journal about what I actually think of as the conditions for qualifying as an "actual friend"... but I guess I won't try posting any more about that to this comment until I know what I actually want to say. And I guess I might as well repeat the other warnings about the journal. I write about literally everything that seems even remotely worth writing about, and that's lots of stuff, and most of it is boring. The journal contains X-rated content, and often TMI. And then there's all the quantifiedself data, and the confusing system of tags and abbreviations... Anyway, I guess you can see for yourself. Feedback is welcome. random trivia: I prefer not to follow society's annoying rules for being socially obligated to exchange gifts at specific times of the year. If anyone wants to do something nice for me, please just donate to SIAI instead, or possibly some other charity of your choice.
2Scott Alexander
Thanks, Peer. Note to anyone else considering this: He is not kidding about it being huge, daunting, and unformatted. Not even a little.
0AdeleneDawner
You might want to mention that the journal is rather nsfw/x-rated, tho.
0PeerInfinity
hehe, yes, that too, thanks :) actually, I think I set up the LJ account to automatically give an "Adult content" warning. lol, and so far I've been getting away with writing it while at work, including some of the x-rated stuff...
8PeerInfinity
My main reason for not commenting more often was because I was afraid that... hmm... I made a few attempts to finish this sentence, but so far all of them triggered one of the excuse-generating modules that I've noticed in my brain. So maybe I'll just leave that sentence unfinished. Basically, I was afraid that posting more comments would somehow have net negative utility, for reasons that it turns out don't actually make sense.

So I guess I'll start posting anything I think is relevant, until I start getting downvotes. Rather than asking here about whether specific things would be a bad idea (copypasting stream-of-thought comments I wrote while I was reading a LW article, without bothering to erase bits I later realize don't make sense?), I'll just go ahead and post until I start getting downvotes. I have a bad habit of overestimating the badness of negative feedback.

Even if karma isn't a perfectly accurate measurement of whether my comments are having a positive or negative effect, it's still a reasonably useful approximation, so I'll go ahead and just try to maximize my total karma, rather than preemptively panicking about any post that I suspect might get downvoted... which is pretty much any comment I could possibly make...

And then there's the Umeshism "If you've never posted a comment that got downvoted, your comments are boring", or would that be "If you've never posted a comment that got downvoted, you're not posting enough comments"?

Then there's the question of how much time to spend reviewing and tweaking my comments, but I guess karma can answer that too. Then there's the question of whether to treat a zero-karma post as actually having negative value, just cluttering the comments thread... that seems like a more difficult question.
5PeerInfinity
I've been following Eliezer since shortly after he started posting to SL4. Back then I went by the name "observer". Oh, and I donate lots of money to SIAI. The past couple years it was between $6000 and $7000 (about 20% of my income), but I plan to donate more from now on. This year I pledged $20,000 (over 60% of this year's income), and I might not even need to take money out of savings in order to pay this. Seriously. I'm a hardcore Singularitarian. A Yudkowsky Singularitarian, not a Kurzweil Singularitarian. And yes, I like the word "Singularitarian" :) I have a user page on the Less Wrong wiki I also have a user page on the Transhumanist Wiki
0PeerInfinity
Vote down this comment if you disapprove of this method of using subcomments to get more targeted upvotes and downvotes. I was about to request that no one upvote this comment, but on second thought, that's silly. Upvote if you want. I could make separate comments if I wanted to track upvotes and downvotes separately, but past experience has shown that there aren't likely to be enough total votes for that to be worthwhile.
-1PeerInfinity
Vote down this comment if there is anything at all you don't like about anything I've written here:

- using lots of words to say something that should be simple
- wasting words talking about things that I was about to say, but already realized were silly
- general whinyness or neediness
- parenthetical comments
- run-on sentences
- incompletely thought-out ideas
- not bothering to use the special syntax for making clickable links
- poor spelling and/or grammar
- minor or not-so-minor typos
- posting too much off-topic stuff in what's supposed to be a thread for people to just say "hi"
- the order these subcomments appear in on the page
- general incoherence or incomprehensibility
- general insecurity
- this list being too long
- anything else that you personally dislike

Oh, and I didn't mean to put the emphasis entirely on voting. Comments would be more helpful than votes, so please comment if there's anything you want to ask or comment about.
1NancyLebovitz
As a matter of dubiousness which is on the edge of dislike: using upvotes and downvotes rather than encouraging conversation.
0PeerInfinity
lol, I knew there would be something I didn't think of to add to the list, thanks. comment upvoted :) er... wait... did I actually discourage conversation?
3NancyLebovitz
I'm not sure that you actually discouraged conversation, but you put so much emphasis on voting that I felt as though conversation was falling off the agenda.
0PeerInfinity
ok, thanks, I'll add a note to those other comments mentioning that conversation is encouraged. comment upvoted :)

Hi.

Any comments I've made have been in the last few months. I've been lurking this site since its inception.

I lurked until I read something I really disagreed with.

If we have a higher percentage of lurkers, then what bell curve are regular commenters on the far end of?

4NancyLebovitz
Several bell curves I should think-- knowledge of the sorts of thing LW specializes in, free time, and self-assurance, at least.

Aww, this made my night! Welcome to all!

[-]Illa40

Hi.

Hey all.

Basics: 23, NY, "self-taught", mixed background. I'm mainly interested in group rationality.

I've read OB, on and off, since late '07 and LW since the beginning. I almost never comment on either. I still don't know a chunk of the jargon. Sometimes I can't tell whether I genuinely don't understand a post, or whether the jargon is just making me think I don't when I actually understand the topic.

I'm wary of blogs. I think a popular blog/blogger creates a cult of personality. It raises its author's status far too high. That makes them high-status stupid, and us low-status stupid. And subsequently this botches any true attempt at community creation.

Hi, I'm fascinated.

Howdy do da. I finally brought myself to comment the other day. I may post some thoughts soon enough. I've found this website to be pretty influential. I'm here for the long run.

Hi. Still reading through, but got some thoughts a-bubbling.

Hi, I'm a lurker mostly because I was reading these off my RSS queue (I accumulated thousands of entries in my RSS reader in the last year due to work/time issues).

0Paul Crowley
Hi, welcome to Less Wrong, thanks for delurking!

25 yr old business consultant from India. Been a lurker for the past 6 months, ever since I got here through a random Google search on probability.

I don't post because it takes me a day or two to really 'click' on most of the discussions. By then, I usually find everything I want to add is already in the comments section. I'll join in as soon as I have something significant to contribute.

Keep up the great work!

[-]taw30

I wonder how you're going to enforce your karma targets if other people are more generous (as seems to be the case already).

7wedrifid
Particularly since the first thought I had when I read the '4 karma' norm assertion was "Where is a comment with 4 karma? I need to vote it up." This wasn't contrariness precisely; rather I thought:

* Someone can only introduce themselves here once. That means this isn't a gamable karma source.
* The commenters are lurkers; it really doesn't seem to be a huge problem giving lurkers a few karma when they signal that they are friendly enough to respond to a greeting and have some interest in engaging with the community.
* The primary difference that karma has for new members is that it is a requirement to make posts. Long-time lurkers who engage in friendly introductions (in an implicit engagement with community spirit) are not the class of people who I would want to prevent from making posts.
* The only post by a lurker which should not have been made was by someone who would not have responded to an invitation from (mere) Kevin, so this limit wouldn't have helped. In fact, that post was made despite the poster not meeting that karma qualification!
* The only real 'free karma fest' risk here would be if the OP karma-spiralled. In fact, given the 10x multiplier, the OP has gained more karma so far than all of the introductions combined!
* It just isn't Kevin's place to specify how other people must vote. Me violating the 4 limit makes it less likely for other more compliant individuals to feel constrained by a barrier that is purely imaginary.
5Jack
Given the 10x multiplier the OP doesn't need more karma, but I would like to see this promoted, since it would probably reach more lurkers that way.
1wedrifid
I don't object to the OP being upvoted (and have done so myself). I merely give perspective on relative karma festivity.
0Kevin
Yeah, I didn't expect this to get me as much karma as it did, but I underestimated the lurkers that would vote me up! I would also like to see this promoted, but I don't care about the karma points.
1Kevin
I got rid of it because I decided it didn't matter.

Greetings everyone.

I am feeling somewhat lethargic at the moment having just gotten off work, but I am pleased to see such a dedicated set of individuals who take the time to debate such a variety of topics and engage in rational discourse. Self-critique is important (love the name; Less Wrong).

As far as I am concerned everything we think we know is wrong. There is only "less wrong." Some things we have a pretty good grasp on and may only be .0000001% wrong. But I have to wonder just how many things actually fall into that category and how much of it is "wishful thinking" or hubris on our part to think that we know more than we actually do.

MTF

Hi :) recent neuroscience grad, currently doing neuropsychopharm research. love the site. got here through rebelscience.org, i believe

Hi. UK lurker. Found Overcoming Bias many years ago from a link from Scott Aaronson's blog. Have been reading ever since. In case you're interested in demographic stuff, I'm a stats geek working in a finance firm. I'm very interested in Bayesianism in its application to finance.

I lurked til a few weeks back when I read something I really disagreed with.

Tom the Folksinger at your service. Come by MySpace/tomloud for a stupid song or two. My continuing thesis is an investigation of the effects of organized sound on higher organisms. I am a voter registrar and I can show you the latest in Industrial Hemp products. Did you know hemp hurds can be mixed with a little lime and water and it will vitrify and make its own cement? I can give people knowledge but I just can't get them to think without lighting literal fires under 'em. And y'all know what that is like...

Hi, I am still reading LW and also recommended books, papers, fanfics :D

In the future I type again. Wonderful content and community. Very, very good.

Hi.

I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don't have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I've enjoyed lurking thus far and at least believe I've learned a lot, so that's cool.

[-][anonymous]00

Hi, this would be my second post. I got here from Harry Potter and the Methods of Rationality. I've decided to move to active participation, so not expecting to remain a lurker for long. However, I have more reading to do first (Sequences). You wouldn't want an uninformed participant, especially if they're as argumentative as I know myself to be.

Indeed, part of why I think this community might prove worth posting in is that, compared to most anywhere else, it doesn't seem easy here to get away with just "having an opinion" - without putting in effort to understand what you're talking about.

Hmm. Make sure you back up your comment, if you value it.

Regarding the suggestion that the mechanism doesn't work, you can see something similar with VHS vs Betamax. The VHS team could pitch: "don't buy Betamax - because if you do you will suffer the pain of throwing all your videos away when we ultimately win".

Personally, I figure that the VHS team can make pretty sure that people will think that for themselves anyway - thought censorship or no.

3Mitchell_Porter
The mild LW censor is more subtle than that. Comments can continue to exist but do not show up unless you find the right path to them. It's apparent that to have a sane policy on this matter, Eliezer would have to change his mind. I cannot tell whether the existing policy is mainly supposed to prevent people from thinking scary thoughts, for the sake of their own well-being, or whether there is some genuine fear that possible AIs in the future will malevolently affect the past by being sketchily imagined in the present - which is absurd. Or maybe it's some other variation on this idea which we're all supposed to be tiptoeing around. But the effect of the censorship (however mild it is) is to make people unable to think and talk about the problem in a rational and uninhibited manner.

I really think that the key issue is the possibility of transhuman torture, and whether we permit that to even be mentioned. The current policy seems to be that I can talk about the possibility of a maximally unfriendly postsingularity AI torturing the human race for millions of years, but I am not allowed to talk about whether a proposed information channel, whereby a possible but not yet existent AI supposedly threatens people with this in the present, makes any sense at all, because just thinking about it is traumatic for some people.

I submit that this policy is inconsistent. The proposed information channel does not actually make sense, and in any case all the trauma is contained in the raw possibility of transhuman torture occurring to us, some day in the future. You shouldn't need the extra icing of quasi-paranormal influences to find that possibility scary.

We should separate these two factors - the mechanics of the information channel, and the terror of transhuman torture - and decide separately (1) whether the proposed mechanism makes sense (2) whether the topic of transhuman torture, in any form, is just too psychologically dangerous to be publicly discussed. I say No to b
3Richard_Kennaway
As I understand the original posting and Eliezer's response to it, the problem is not that some over-delicate souls might be distressed at a hypothetical danger. The (alleged) real problem is far worse: it is that thinking about these scenarios is the very thing that makes you vulnerable to them. And to twist the knife further, the problem isn't limited to UFAIs. You might end up being tortured by a FAI, if you didn't manage to think about these things in just the right way. Better to remain safely ignorant -- if you can, having read just this much.

I can't resist pointing out a religious analogue. There is a Christian belief that people who lived and died without the opportunity to hear the Word of God may still be saved if they nevertheless lived good lives in ignorance of the divine commandment. (Historically, I think the purpose of this doctrine was to protect the writings of the ancient Greeks and Romans from wholesale condemnation and destruction, but that's by the way.) However, people who have had the opportunity to hear the Good News but reject it are damned without mercy. In God's eyes they are worse than the most depraved of those who were ignorant through no fault of their own.

Some "Good News", and some "Friendliness"!
0timtyler
Surely that depends on exactly what you define "friendly" to mean.
-4wedrifid
It certainly seems to. Somewhere on my list of "ways to stop an AI from torturing me for 10 million years" is "find anyone who is in the process of creating an AI that will torture me and kill them". I'm not overly concerned what name they give it.
-1Richard_Kennaway
Since Eliezer considers it rational to prefer TORTURE to SPECKS, an FAI to his specification would presumably do the same. In either case, too bad if you're the one who gets TORTUREd. Maybe 3^^^3 people to SPECK will never be created, but what is one person compared with even the mere bazillions that FAI-assisted humanity might produce in mere billions of years? You need to make very sure you're one of the elect before creating God.

The parallels with Christian theology just keep coming. Thanks to Timeless Decision Theory, you were either saved or damned from the beginning. When you attain to the correct dispositions to be immune to counterfactual blackmail, you do not become saved, but discover that you always were. And do not delay, for "Every day brings you nearer to everlasting torments or felicity." "Your transgressions have sent up to heaven a cry for vengeance. You are actually under the curse of the Almighty."

The Bible makes a lot of sense read as a garbled account of an AI that played around with the human race for a while and then went away.
0wedrifid
Which brings us back to... who is creating this unfriendly AI that is going to torture me and where do they live?
0Richard_Kennaway
Probably the same people who push fat people under trolleys. I wonder what sort of AI Peter Singer would want to create?
1timtyler
So: I wonder if you got "mildly censored". All I can see now is "comment deleted".
[-][anonymous]00

Hi! I too found the site through MoR, and I have to say, as fun as MoR is, the posts here are even more interesting.

[-][anonymous]00

Hi

I have been an atheist all my life (50 years). If other people's culture made them believe in God, then I suppose mine made me an atheist and made me think that it is very important to know right from wrong. I want to know why there are so many believers. I suppose there are many reasons, such as:

* fitting in with one's family/culture
* being a "cognitive miser"
* dysrationalia
* wishful thinking
* terror management theory
* thinking that reason and doubt really is the devil in their head
* memes as a virus or super organism running the show
* just not knowing better

many ... (read more)

[-][anonymous]00

Hi.

[-][anonymous]00

Hello.

Female Web developer 41 years old rural Indiana native

I've commented a few times, but not many.

[-][anonymous]00

Less Wrong is pretty intimidating. Thus if you comment here, you are either dumb or smart. But most are just smart enough to know that they are too dumb to contribute something valuable. There are some exceptions like people asking questions though...

[-][anonymous]00

I'd never not lurked anywhere until I not-lurked here now.

[-][anonymous]00

Hi.

[-][anonymous]00

Hi.

[-][anonymous]00

Hi.

[-][anonymous]00

Hi.

Hi. I'm a part-time lurker, part-time active participant.

6Kevin
...with karma in the 95th percentile. :P
1RobinZ
(How'd you calculate that, by the way? Just eyeballing, or is there a page?)
0Kevin
(Just eyeball... on further reflection it may be more like 80th percentile. I know that on Hacker News, the karma distribution is exponential with a quick fall-off and I expect the distribution here is very similar.)
0MichaelGR
Funny you mention Hacker News, I'm about 100 karma points from being in the top 100 there (though under a different name). I suspect there's a pretty big overlap between the LW and HN crowds. I wonder if there's a high correlation between karma on one site and karma on the other?
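The eyeballed percentile estimates in this exchange can be sketched: if karma really is exponentially distributed with a quick fall-off, a user's percentile is just the exponential CDF at their karma total. The mean karma figure below is an illustrative assumption for the sketch, not actual Less Wrong or Hacker News data.

```python
import math

# Assumption: karma ~ Exponential with this mean. The value is made up
# for illustration; it is not measured from any real site.
mean_karma = 50.0

def percentile(karma: float, mean: float = mean_karma) -> float:
    """Exponential CDF at `karma`, expressed as a percentage of users below it."""
    return 100.0 * (1.0 - math.exp(-karma / mean))

# Under this toy model, modest karma already lands in a high percentile,
# which is the "quick fall-off" described above.
print(round(percentile(150)))  # 3x the mean -> roughly the 95th percentile
print(round(percentile(80)))   # -> roughly the 80th percentile
```

This is why the estimate could plausibly slide from "95th" to "80th" percentile: near the tail, the mapping from karma to percentile is quite sensitive to the assumed mean.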
[-][anonymous]00

Hi! This made me register: first barrier overcome. I don’t think I will ever contribute that much, but maybe I will add a comment now and then when I have something intelligent to say. What I have read here and on OB has contributed quite a bit to my thinking.

[+]Clippy-80