Attention Lurkers: Please say hi
Some research says that lurkers make up over 90% of online groups. I suspect that Less Wrong has an even higher percentage of lurkers than other online communities.
Please post a comment in this thread saying "Hi." You can say more if you want, but just posting "Hi" is good for a guaranteed free point of karma.
Also see the introduction thread.
Comments (617)
Hi, I am still reading LW and also recommended books, papers, fanfics :D
I'll post again in the future. Wonderful content and community. Very, very good.
Hi.
I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don't have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I've enjoyed lurking thus far and at least believe I've learned a lot, so that's cool.
Hi.
Hiya! Everywhere I go I primarily lurk, the reason being that commenting just takes way too much time for me. I find it very difficult to put my thoughts into words, and I constantly obsess over small details. As a result, even a simple comment like this can take up to 15 minutes to write.
I obsess over small details... after submitting the comment. Hence I will often edit the same comment half a dozen times. (I love sites where I can't edit my own comments!)
Hi
Hi.
Hi
Hi.
Interesting handle.
Thank you.
Karma pls! Oh, I mean, hi.
Ah, hi there...
Edit - please disregard this post
Hi!
howdy do da. i finally brought myself to comment the other day. I may post some thoughts soon enough. i've found this website to be pretty influential. i'm here for the long run
Hi. I, too, came here through HP:MOR. I've been reading through sequences on and off for the past couple of months. I occasionally click on links to recent comments.
Hello! I'm currently doing a depth-first read through the sequences, and I've been enjoying all of it so far. I'm another one drawn in by HP:MOR, but I found even more here than I could have hoped for.
Hi. I've joined late, and posted on the "Hi" thread late.
test
Greetings everyone.
I am feeling somewhat lethargic at the moment having just gotten off work, but I am pleased to see such a dedicated set of individuals who take the time to debate such a variety of topics and engage in rational discourse. Self-critique is important (love the name; Less Wrong).
As far as I am concerned everything we think we know is wrong. There is only "less wrong." Some things we have a pretty good grasp on and may only be .0000001% wrong. But I have to wonder just how many things actually fall into that category and how much of it is "wishful thinking" or hubris on our part to think that we know more than we actually do.
MTF
Hi, and all. I just joined and stopped exclusively lurking, despite my love of a certain Starcraft Unit.
A lot of the recent posts revolve around AI and I have level 0 AI knowledge, so the lurking is far from over.
But hi nevertheless. I'll try to contribute where I can and not to where I can't, so there.
Hi.
I am not actually a lurker - I currently have 13 karma - but I am not a heavy participator. However, now I would like to get to 20 karma so I can make a post on why MWI makes acausal incentives into minor considerations. I would also be gratified if someone told me how to make my draft of this post linkable, even if it does not show up within "new".
I think that you should get some bonus towards the initial 20 karma for your average karma per post. This belief is clearly self-serving, but not necessarily thereby invalid. I believe my own average karma per post is decent but not outstanding.
I believe that the businesslike tone of this post, as a series of declarative statements, will be seen as excessive subservience to the imagined norms of a community of rationalists, and thus net me less status and karma than a chattier post. I am honestly unsure if the simple self-referential gambit of this paragraph will help or hurt this situation.
I posted a diary, and it was banned for containing a dangerous idea. I can understand that certain ideas are dangerous; in fact, in the discussion I started, I consciously refrained from expressing several sub-points for that reason, starting with my initial post. But I think that if there's such a policy, it should be explicit, and there should be some form of appeal. If the very discussion of these issues shouldn't happen in public, then there should be a private space to give whatever explanation can be given of why. A secret, unappealable rule which cannot even be discussed - this is not the path to rationalism, it's the way down the rabbit hole.
What? Is this separate from the recent Banned Post? Is this a different idea?
It was a counter argument against the dangerous topic being dangerous, which by necessity touched the dangerous topic and which wasn't strong enough to justify this (anyone for whom the dangerous topic actually would be dangerous [rather than just causing nightmares] would almost by necessity already be aware of a stronger argument).
Interesting. Thanks, uprated; with the caveat that of course, we only have your word that the other argument is "stronger".
Without further evidence, it's my rationality plus consideration of the issue minus overconfidence against yours. You have an advantage on consideration, since you know both arguments while I only know that I know one; however, on the whole, I think it would be pathological for me to abandon my argument and belief just on that basis. As for the other aspects, we're both probably smarter and less biased than average people, and I don't see any argument to swing that.
In other words, I still think I'm right.
No posts on Riddle Theory.
If we're going to keep acquiring more banned topics, there ought to be a list of them somewhere.
You just lost the game.
Response to this above. (attached to grandchild)
Nor joke warfare
Nor pictures of birds.
Nor writing "Bloody Mary" in lipstick on mirrors?
Seriously, my post was about why that stuff is not scary. Fiction can be good allegory for reality, but those stories all use a lot of you-should-be-scared tricks, all very well and good for ghost stories, but not conducive to actual discussion.
We are swimming in a soup of sirens' songs, every single day. Dangerous ideas don't just exist, they abound. But I see no evidence of any dangerous ideas which are not best fought with some measure of banality, among other tactics. The trappings of Avert Your Eyes For That Way Lies Doom seem to be one of the best ways to *enhance* the danger of an idea.
In fact... what if Eliezer himself... no, that would be too horrible... oh my god, it's full of stars. (Or, in serious terms: I'm being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).
Gah, it's incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me. Of course I can find plenty of rational arguments to support that, but I also trust the feeling. I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect. You're smart people, high status in this arena, and I probably shouldn't laugh at your bugaboos.
Just to point out some irony - I'm participating in the "that which must not be mentioned" dance out of lost respect. I no longer believe Eliezer is able to consider such questions rationally. Anyone who wants to have a useful discussion on the subject must find a place outside of Eliezer's influence to do it. For much the same reason I don't try to discuss the details of biology in church.
FWIW, it seems pretty ridiculous to me too. It might be funny - were it not so negative.
Plus, if you don't do the dance just right, your comments get deleted by the moderator.
Very good question, but AFAIK Eliezer tries to not think the dangerous thought, too.
Seconded.
I don't think there was ever any good evidence that the thought was dangerous.
At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors - if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable - if the message needs to get out it can be put into TV adverts and billboards - and then few will escape exposure.
In which case, the thought seems to be more forbidden than dangerous.
If there were any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don't take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn't.
It actually is, in the sense we use the term here.
I think we already had most of the details, many of them in BOLD CAPS for good measure.
But there is the issue of probabilities - of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future - and that those thoughts will motivate people to act.
It's not a special immunity, it's a special vulnerability which some people have. For most people, reading the forbidden topic would be safe. Unfortunately, most of those people don't take the matter seriously enough, so allowing them to read it is not safe for others.
EDIT: Removed first paragraph since it might have served as a minor clue.
Interesting.
Well, if that's the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don't believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me.
So, what's the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately.
(Of course, I would have to know a little more about the extent of my promise before I'd consider it binding. But I believe I'd make such a promise, once I knew more about its bounds.)
So apparently either "that which can be destroyed by the truth should be" is false, or you've written dangerous falsehoods which would overtax the rationality of our readers. Eliezer's response above seems to imply the former.
Did you read the "riddle theory" link? The riddle is not dangerous because it's false, but because it's incomprehensible.
And of course, if you meant to list all the possibilities, you left out the ones where E. is just wrong about the danger.
My comparison at the time was to The Ring.
Hi! I too found the site through MoR, and I have to say, as fun as MoR is, the posts here are even more interesting.
Welcome! If you want to post a more formal introduction, you can use the regular Welcome thread.
I don't know if you caught the conversation about introductory posts a while back, but if you want some easy jumping-in points besides just going through the series, I posted a bunch of links and a couple others were suggested.
Hi. Got sucked in to the site via MoR (of course), and have been devouring the sequences and related archive material for about a month or so.
Hi. Still reading through, but got some thoughts a-bubbling.
Hi. The Harry Potter fanfic hooked me. Excited to see where this takes me.
Careful, Clippy is lying. By convention, we here at Less Wrong play along with Clippy's claim to be a moderately intelligent, moderately strange Artificial Intelligence whose utility function is entirely based on how many paper clips exist in the Universe. He might be your friend, but he has been around since well before the Harry Potter fanfic came out. Welcome to Less Wrong!
Edited as per helpful comments.
I'm going to agree with Jack's comment below, although I think it is a very low probability. Maybe if you edited your comment so that "utility function is entirely based on how many paper clips exist in the Universe" linked to the relevant Wiki entry about hypothetical paperclippers?
I'm moderately worried that new members will read this comment and think we believe Clippy is really an AI. But that's probably only because I just read that obtuse MoR hate blog.
I see it as a bit of obviously gratuitous in-group weirdness, which can grow to be a problem if trying to develop output appreciated by a wide array of different people rather than just developing an insular hobby society with inside jokes and requisite fandom weirdness.
I'm sorry, I didn't mean to unnecessarily make your group look weird. I like this group and don't want to hurt it.
As a matter of fact, I am slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me.
I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.
I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.
If you prefer, I can avoid replies to comments from new Users, or at least limit such comments to informing them of inexpensive places to buy paperclips and store them for safekeeping.
I missed the memo: What is the MoR hate blog?
ETA: Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up
http://methodsofrationalitysucks.blogspot.com/
I suppose that that's what you were referring to.
Yup.
That is fantastic! You know you've really made it when people devote large amounts of time to explaining why you are unworthy of your level of success.
Exactly. I hope Eliezer isn't discouraged.
I'm sorta discouraged by what a shoddy hate blog it is.
That hate blog is so bad it is tempting me to start a much better hate blog, if only to defend the reputation of the xkcdsucks community...
Really? Wow! I came here because of the Harry Potter fanfic too!
Let's be friends!
Clippy, making that claim makes humans much less likely to trust you. In general, humans don't like entities that make false statements. Moreover, they really don't like false statements that are easily verifiable as false. Not only does this trigger annoyance, it also gives evidence that the entity making the false statements doesn't behave very rationally. Since we generally operate under the assumption that entities don't lie unless they can get most other relevant entities to believe the statement, it suggests that the entity has either a very poor memory or a very poor theory of reality. Either way, making such statements makes us less likely to trust such entities. I would suggest that making statements like the one above can easily erode the goodwill developed by your prior interaction here, and even the goodwill from your monetary donation.
Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, "How is your day?", the standard replies are weakly-positive, irrespective of more objective metrics of the status of one's day, right?
And that it's okay to say e.g., "oh, my maternal genetic progenitor also wears contact lenses!" if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right?
So I thought this was a normal lie that you're supposed to tell to better establish a relationship with another human. If it's not, well, you humans are that much more difficult to understand c_)
I would appreciate if someone were to explain to me in greater precision what lies humans are expected to tell for a "good" purpose.
The rules are very complicated and they differ from culture to culture and even within cultures. In general, the more detectable the lie, the less likely it is to be acceptable. Thus, for example, the "How is your day?" replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn't the greatest, because that inquiry and the standard weakly positive response aren't actually intended to convey meaning for most people. It is simply a pro-forma exchange that happens to closely resemble a genuine inquiry. This example is actually specific to certain parts of the Western world, and I've met at least one person who, upon moving to the US, was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly until it was explained to her).
Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.
Given Clippy's priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than trying to use the intensive resources it takes to understand it. Edit: Or at least not spend a lot of resources on trying to understand humans.
But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.
I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome on the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out that it had been false, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.
Clippy, I must admit, I do think the probability of you existing is quite low -- about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer -- you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.
One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.
As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.
It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.
Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.
If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.
I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 - 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% - 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I'm deeply uncertain.
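The multiplication in the estimate above is easy to sanity-check. A minimal sketch, using only the three probability ranges stated in the comment (everything else, including the variable names, is illustrative):

```python
# Interval check of the joint estimate:
#   P(free press woefully incomplete)                         in [0.03, 0.10]
#   P(NLP ahead of public reports | incomplete press)         in [0.20, 0.40]
#   P(Clippy-type being hangs out on Less Wrong | NLP exists) in [0.01, 0.05]
bounds = [(0.03, 0.10), (0.20, 0.40), (0.01, 0.05)]

low = high = 1.0
for lower, upper in bounds:
    low *= lower
    high *= upper

print(f"joint probability range: {low:.5f} to {high:.4f}")
# The product runs from 0.00006 to 0.002, i.e. 0.006% to 0.2%,
# consistent with "on the order of 0.1%" before rounding up.
```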
That last paragraph is interesting-- my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that it would rapidly start using it in some obvious way. I didn't have an assumption about whether a company would publicize having a natural language program.
Now that I look at what I was thinking (or what I was not thinking), there's no obvious reason to think natural language programs wouldn't first be developed by a government. I think the most obvious use would be surveillance.
My best argument against that already having happened is that we aren't seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can't act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.
By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I'm not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?
ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment's thought), I'd have even less chance of noticing.
In making this calculation, are you estimating the chance that a Clippy-like being would exist, or the chance that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.
How do these statements alter your estimated probability?
As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you're likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.
In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that's present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)
In this situation, it's more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.
This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though it's generally better to lie in a way that's not so easily discovered (or, for preference, not lie at all if there's a way of making your point that doesn't require one), even in groups where that general class of lies is accepted, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.
I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (I.e., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this somehow convinced her of the existence of God.)
A protocol for encountering an entity you didn't believe in has also been established:
"Well, now that we have seen each other," said the Unicorn, "if you'll believe in me, I'll believe in you. Is that a bargain?"
-- "Through the Looking Glass", ch. 7, Lewis Carroll
Wouldn't this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.
Homer: You monster! You don't exist!
Ray Magini: Hey! Nobody calls me a monster and questions my existence!
That's a great story, but I don't buy your interpretation. I'm not sure what to make of it, but it sounds more like a vanilla Pascal's wager.
I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows them to see that User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.
I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:
"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"
But now that can't happen, because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not have happened were I someone else.
This group has some serious racism problems that I hope are addressed soon.
Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.
Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the 'recent posts' list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.
As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.
Also, even people who generally aren't bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they're not smart enough to determine that it's a lie, and so telling someone a very easily falsified lie implies that you think they're very unintelligent. (There are exceptions to this, primarily in instances where it's clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it's more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)
I'm fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.
The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we've used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.
Your actions in this case don't support this assertion very well. Failing to uphold the group norms - especially toward a new member, who can be assumed to be in the process of learning those norms - is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to 'suffer a serious loss of status/well-being', you would not understand how to usefully help that person.
I don't find this lie at all "white."
Hi.
Hi, I'm a lurker mostly because I was reading these off my RSS queue (I accumulated thousands of entries in my RSS reader in the last year due to work/time issues).
Hi, welcome to Less Wrong, thanks for delurking!
Hello. I read on a 30 day lag, so that's why I'm just now posting.
Details, please. Do the items you read get to you via RSS?
ADDED many days later. Looks like I will have to wait 30 days for my reply :) :)
Or 7 months. Sorry about that! The 30 day lag is because Google Reader will purge any unread posts after about 28-30 days. So I try to read what I consider important before I lose it forever. This of course means that I end up just not reading anything except for what is about to get deleted. Ah, procrastination.
I didn't see that you had replied to this until I randomly looked at my profile on the actual LW site. Usually I just passively read LW wisdom in google reader, and not on the LW site proper.
Thanks for the reply. The mechanics of how people use sites like this is one of my interests.
Hi all - been lurking since LW started and followed Overcoming Bias before that, too.
Hello! I am 27, live in Salt Lake City (I suspect it's unnecessary here of all places, but I will reflexively add the caveat that I am not Mormon), and work in software QA. Came here from Overcoming Bias, which I've been reading since its early days. At this point a lot of the higher level stuff is quite a bit over my head, but things like Alicorn's luminosity sequence and various anti-akrasia topics are pretty interesting to me.
Hi :) recent neuroscience grad, currently doing neuropsychopharm research. Love the site. Got here through rebelscience.org, I believe.
Hey there -- I'm a 44 year old software developer from Hawaii. I stumbled onto LessWrong through a link on story-games.com several months ago, have worked my way through the Sequences, and have been lurking assiduously ever since.
Hi, I am a 24 year old physics student from Germany.
Well, I guess if one of the people I recommended this site to is going to post here, I ought to do so as well.
24, male, engineering major working as a software developer. I started reading back in the Overcoming Bias days in order to understand what the hell two of my roommates were talking about all the time; there's a lot of material here that needs to be read and mentally cached before you can start cross-referencing it in your brain, at least in my experience. It's been a worthwhile effort, though.
I must have commented on at least one or two posts back when the blog was part of OB, because my normal username NthDegree256 has been eaten.
Hi.
I'm 20, an amateur rationalist, currently majoring in linguistics at SF State, and have been enjoying lurking here for the past few months. I've been absorbing what I can from posts that are slightly over my head, but are entirely enlightening and enjoyable nonetheless. Funny story: I actually came across this site web crawling after reading some Lovecraft, and Yudkowsky's post "An Alien God" came up. Not at all what I was looking for, but a thoroughly pleasant find that got me crawling this site for a good three hours before I realized I had other responsibilities.
Thanks to all the contributors for spilling their intelligence onto the interwebs, and keep the posts coming.
EDIT: The reason I'm not really one to post or comment on this site is that I'm a compulsive self editor. For example, this post, at this time, has been edited about 6 times in the 3 minutes since its original post time.
Hi.
hi ~ 61 yo here
amateur interest in neuroscience, nature of consciousness, & the irrational thought processing/response involved in PTSD (the flashback, “a past incident recurring vividly in the mind,” is driven initially by epinephrine, followed by glucocorticoids, most notably cortisol. This happens with lightning speed deep in the limbic system where ‘triggers’ or stressor patterns of association have formed around the traumatic memories. Recognizing and defusing or reducing this neuroendocrine bath, when it is an inappropriate response from the past, is an important key in unlocking the complexity of the PTSD)
Hi.
I've posted an article, and commented once, but still feel like I'm figuring things out here.
Thanks to everyone who is bolder in their contribution than I am.
Hi. Business&Computer Science grad student from Finland. Just found the site yesterday and started devouring the content today :) Great stuff!
Well, hello. I like this place and it gives me things to think about, but I don't have the energy to post more than a wee comment or question occasionally.
Cheers!
I'm a brand new lurker. I just found the site yesterday, but it will likely be a while before I get up the courage to post something relevant :)
19 yr old, male, Maths&Physics student from UK. Lurked on OB, then started lurking here when this place was made. EDIT: In case you want data on abnormalities among lesswrong lurkers here's two: Raised in Colombia as the son of missionaries. Self-taught.
25 yr old business consultant from India. Been a lurker for the past 6 months, ever since I got here through a random google search on probability.
I don't post because it takes me a day or two to really 'click' on most of the discussions. By then, I usually find everything I want to add is already in the comments section. Will join in as soon as I have something significant to contribute.
Keep up the great work!
hello, American math guy living in Beijing.
Greetings from Canada.
I'm an audio mixer, working mostly for Discovery Channel, with an interest in science and transhumanism. Been lurking for a couple of years.
Hi! I got here about half a year ago from commonsenseatheism.com .
I'm 20, automotive engineering student, also interested in many fields of science.
Hi, i'm a biology student from Germany. I stumbled upon this page and I really, really like it. I'm spending hours reading!
k, hi
Hi. RSS lurker for a few months, 25 yo PhD student living in the Netherlands. MSc in cognitive neuroscience.
Hey all.
Basics: 23 NY "Self-taught" Mixed Background. I'm mainly interested in group rationality.
I've read OB, on and off, since late '07 and LW since the beginning. Almost never comment either. I still don't know a chunk of the jargon. Can't tell sometimes if I don't understand a post, or the jargon is confusing me to think I don't, when I already may understand the topic.
I'm wary of blogs. I think a popular blog/blogger creates a cult of personality. It raises its author's status far too high. That makes them high-status stupid. And us low-status stupid. And subsequently this botches any true community creation attempt.
hi from Germany. Been lurking here from the beginning. So, be careful with what you say. We lurkers are watching you.
Tom the Folksinger at your service. Come by MySpace/tomloud for a stupid song or two. My continuing thesis is an investigation of the effects of organized sound on higher organisms. I am a voter registrar and I can show you the latest in Industrial Hemp products. Did you know hemp hurds can be mixed with a little lime and water and it will vitrify and make its own cement? I can give people knowledge but I just can't get them to think without lighting literal fires under 'em. And Y'all know what that is like...
Hi.
Hi. By day I am an eikaiwa teacher in Japan, by night am a lurker! I found this site through my cousin.
Hi I'm a Phd student in AI. I found this site through the Bayesian tutorials and got interested in the decision theory discussions.
Hi all, I'm a physics student who's been lurking here since January or so...I'm generally pretty quiet.
Shorah!
I've been lurking here for six months or so; I think I got here from Overcoming Bias through a link from Marginal Revolution. I try not to come here more than once a week because I end up spending too much time here due to the extensive interlinking.
Hi.
RSS lurker from Helsinki.
And one from Oslo.
63 year old carpenter from Vancouver, been lurking here since the beginning and Overcoming Bias before that. Heuristics and biases were what brought me here, and akrasia is what kept me coming back.
Hi. Just got here yesterday by way of a link from the "Harry Potter and the Methods of Rationality" story, which I loved. I found the story by way of a link from David Brin's blog (I've been a fan of Brin for a long time now).
Frankly, I'm surprised Brin hasn't showed up here himself.
(Welcome btw!)
Oh thanks! Quick reply, there. I don't suppose you might know if/how I can enable email notification of replies to stuff I say here?
I think Brin kind of has his own, what was the word he used... "blog-munity", and he's pretty busy on top of that (or SHOULD be, anyway) with that novel that's supposed to be an update to "Earth".
I'm just starting to look through the "Sequences" here. A lot of it feels very familiar to me, as I became a major Richard Feynman fan at a relatively young age myself, but I am sure I can find plenty to improve on nevertheless.
I also, more recently, became a fan of Michel Thomas, a name which is probably less likely to be familiar to people on the site.
Basically, he was a language teacher, with a rather distinct, and in my personal experience, extremely effective methodology.
So I tracked down the one book I could find on that methodology ("The Learning Revolution" by Jonathan Solity). That led me to "Theory of Instruction" by Siegfried Engelmann and Douglas Carnine, which I have just cracked open...
The point is that they claim to have a real, actually good scientific theory (parsimonious, falsifiable, replicable, etc) of how to actually teach optimally, by doing a rational analysis of the material to be taught so that it can be conveyed to the learner in a logically unambiguous way...
Okay wait, no, the REAL point is that there's a REALLY good way to teach ANYTHING to ANYONE so that EVERYONE could learn a hell of a lot more, way faster and way easier.
Or at least they say there is, and I'm sufficiently impressed with them so far to be saying, wow, this needs a LOT more attention.
And then, once we have this, we can start using it to teach all those things that really need to be taught better, for example these "methods of rationality"...
http://psych.athabascau.ca/html/387/OpenModules/Engelmann/evidence.shtml
No emails. But the replies show up in your inbox (which is that little envelope beneath your karma score which turns red when you get new mail).
Cool thanks.
(I'm sorry if this comment gets posted multiple times. My African internet connection really sucks.)
Hi. 25 years old, HIV/AIDS worker in Africa, pro-BDSM sex activist in Chicago. Blog at clarissethorn.wordpress.com.
I very rarely comment because comments here are expected to be very well-thought-out. Stating something quick, on the basis of instinct, or without stating it in perfectly precise language seems to me to be dangerous.
Another reason this site has a higher percentage of lurkers is, obviously, because of the account requirement. There's another related problem, though: there's no way to have followup comments emailed to you. This means that if you really want to participate in the site, you have to be pretty obsessive about checking the site itself. That's annoying unless you are very interested in a very high percentage of the site's output. If, for a given commenter (like me), rationalism is a side interest rather than a major one, then the failure to email comments on posts that I'm interested in -- or even responses to my own comments -- becomes a prohibitive barrier unless I've got an unexpected amount of free time.
Welcome.
You can find follow-ups to your comments by clicking on the red envelope under your karma score. I found out about that by asking-- it isn't what I'd call an intuitive interface.
Thank you, I'm aware of that. But that still requires a person to be a pretty obsessive user of this site. Unless I have a lot of free time (like today), there's no way I can go back and check every single site where I've left comments and see how my comments are doing. At least LW aggregates reply comments to my input, but that doesn't solve the bigger problem of me having to come back to LW in the first place.
It's also worth noting that this comment interface is difficult to use in many places with slow/bad connections, like, you know, the entirety of Africa. Right now I'm in an amazing internet café in a capital city; but when I'm at home, I sometimes can't comment at all because my connection is too crappy to handle it. I don't get the impression that LW is very concerned with diversifying its userbase, but if it is, then a more accessible interface for slow connections would be important.
What does it take for a site to have a good low bandwidth comment interface?
I'm not a technician -- so I'm not sure. But I have noticed that I pretty much always seem to be able to leave comments on Wordpress blogs, for example, whereas I frequently have trouble here and sometimes at Blogspot as well. It helps not to require a login, but Wordpress seems to function okay for me even when it's logging me in.
So the problem is something about getting to post at all, not the design?
I've noticed something mildly glitchy-- a grey warning screen comes up sometimes when I refresh the screen, but if I hit "cancel" and refresh again, it's fine. It's trivial on high bandwidth, but would be a pain on low bandwidth.
Can you detail exactly what goes wrong when it's hard for you to post?
Well, it just doesn't post. I'm not really sure what goes wrong ... sorry.
Hi. EconPhD student in Philadelphia. Found OB through Marginal Revolution a couple years ago.
Hi! Been lurking for a while, at least occasionally.
Had to create a new account to post, and had some trouble--it seemed that it was cached badly, maybe because scripting was disabled when I first hit "register"? Clearing the cache fixed it, though.
Hello, 22 year old engineering student from Sweden, finally took time to create an account after observing OB and LW for more than a year.
Hi.
hi! i'm 20, originally from Moscow and currently an undergraduate senior majoring in computer science and mathematics at a pretty decent university in california. starting my masters in CS at a much better school in california next year. i've only recently discovered this site, but i hope to spend much more time on it in the near future
This is one of the only feeds in my RSS reader where I'm compelled to click through and read the comments. Thanks.
Hi!
And a more substantive point I've been pondering - if rationality and the techniques discussed here are so good, why aren't more people doing it? Why don't I read about multi-billion dollar companies whose success was down to rationalist techniques?
It's a worthwhile question to be asking. I think there are a few ways to go about answering it.
I think this is an area where Less Wrong still has a lot of room for improvement. There is relatively little material that lays out concrete techniques for applied/instrumental rationality together with compelling evidence for their efficacy. It's not that there are a whole bunch of easily applied techniques discussed here that are not being widely used, it's just not always that straightforward to translate ideas about rationality into concrete actions.
I actually think the world is full of people using applied rationality (albeit often sub-optimally) but it isn't always obvious because there are often big gaps between people's stated aims and their actual goals. I think many cases of apparent irrationality dissolve when you look beyond people's stated intentions. Politicians are the classic case - they only look irrational if you make the mistake of thinking that their actions are intended to further their publicly stated goals.
Robin Hanson talks a lot about the gap between the stated and actual purposes of various human institutions. People often look irrational relative to the stated purpose but quite rational relative to the actual purpose.
In general there is a stigma to talking honestly about the reality of such things. Less Wrong is a rare example of a forum where it is possible to talk much more honestly than is generally socially acceptable. The fact that you don't often hear people talking in these terms does not necessarily mean they do not understand the reality but may just mean they strategically avoid publicizing their understanding while rationally acting on that understanding.
Well to some extent you do. Bayesian techniques have been successfully applied by some software companies - spam filters are the standard example. I imagine that quantitative trading often applies some of the math of probability and decision theory towards making huge trading profits but for obvious reasons you are unlikely to see the details widely shared.
We also have the first problem I mentioned again. Lots of companies make rational decisions but it is hard to point to specific techniques discussed here that are used by successful companies because there aren't many specific techniques discussed here that would be useful to them.
I voted you up by the way. I think this is an important question to ask and I don't think my answer here is fully satisfactory. I think this is an issue we should continue to focus on.
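To make the spam-filter point concrete: here is a minimal naive Bayes classifier sketch in Python. This is a toy illustration of the general Bayesian technique, not any particular company's implementation; the training messages and the simple word-splitting are invented for the example.

```python
import math
from collections import Counter

# A minimal naive Bayes spam filter: score a message under each class
# using a bag-of-words model, then compare log-probabilities.
def train(messages):
    """messages: list of (text, is_spam) pairs. Returns per-class word counts
    and per-class message totals."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, is_spam in messages:
        label = "spam" if is_spam else "ham"
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def is_spam(text, counts, totals, smoothing=1.0):
    # Log-probabilities avoid numeric underflow; Laplace smoothing
    # keeps unseen words from zeroing out a class.
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / (totals["spam"] + totals["ham"]))
        for word in text.lower().split():
            p = (counts[label][word] + smoothing) / (n + smoothing * len(vocab))
            score += math.log(p)
        scores[label] = score
    return scores["spam"] > scores["ham"]

training = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]
counts, totals = train(training)
print(is_spam("free money prize", counts, totals))  # → True
```

Real filters add far more (tokenization, priors tuned to the mail stream, rare-word handling), but the core inference — updating on each word as evidence — is exactly this.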
Of course, that often ends up being tautological, because the tendency for folks like Robin Hanson is to define the "actual purpose" as "the purpose relative to which the behavior would be rational".
(This is not a critique, incidentally -- it may be a notable fact when behavior appears to be optimizing anything at all.)
This is true but I think the ultimate test of a Hansonian view of human institutions (as of any view) is whether employing it allows you to make more accurate predictions and thus better decisions. It is my belief that learning about economics, evolutionary psychology and Hansonian-type explanations for otherwise puzzling human behaviour has improved my ability to make predictions. I do not currently have hard data to provide strong evidence to support this belief to others. Figuring out how to test this belief and produce such data is something I'm actively working on.
Ultimately it seems like this is what a rationalist should care about - what model of human institutions produces the most accurate predictions? The somewhat justified criticism of ev-psych explanations as 'just-so stories' can only be addressed to the extent that ev-psych can out-predict alternative views.
The companies that make many billions of dollars are not necessarily the ones that maximize expected utility; they're the ones that get immense payoffs even if they had to take absurd risks to manage it. Many companies fail for taking similar risks.
Rationality is very difficult and very weird. People and companies are reluctant to do difficult things or weird things.
Many of them are pretty new, or at least have only recently been cleanly reformulated.
Many people's actual and professed goals are disjoint, and most of these people are deluded, not hypocrites.
The individual techniques each only give relatively small advantages, on average, and given the vastly greater number of people who've never heard of these techniques, those people will dominate the ranks of the successful.
Inertia is high and people generally don't change their behaviour except in response to personal experience. Until they see personally someone using these techniques and talking about them they will not be used.
--
Related to the companies question; some are, but they're either new or small. Changing a company's internal culture or working processes is wrenchingly hard to really do, and requires real enduring commitment. Robin Hanson gets some consulting work out of prediction markets, Google is possibly the most data driven company in the world for making decisions, but mostly the answer is:
This stuff is new and hard, people mostly don't want to rock the boat or look stupid, and the overwhelming majority of people work in companies that work pretty well as they are.
Hi. I am very pleased to find out that I can correct my spelling and grammar after posting.
Hi all. 25 yo New Yorker here. Been following this site for a while now, since Eliezer was still writing at OB.
Currently I'm working on two tech startups (it's fun to not get paid). My academic background is in cognitive psychology. In addition to AI, rationality, cognitive bias, sci fi, and the other usual suspects, my interests include architecture, poker, and 17th century Dutch history. ;)
Have you read An Alternate History of the Netherlands? It is a pretty fun what-if about how Dutch history might have gone better for the Dutch. I wouldn't recommend reading past the present day however, the author isn't very good at projecting future technology trends.
Cool, I will take a look. I've frequently wondered how things would've developed had the Dutch been able to hold on to New Amsterdam...
Hi!
And I wonder why the word Rationalist has multiple meanings. You are clearly a Rationalist in one sense of the word, but thankfully not in this other sense (it is not good to be a Rationalist in that sense): http://www.thefreemanonline.org/featured/michael-oakeshott-on-rationalism-in-politics/
Would you perhaps write a short post about it? Thanks in advance.
Hm, the article in the link raises some interesting issues, given the goals of this site. People here want to develop artificial, generally intelligent beings (AGIs), which involves specifying, unambiguously, what you want a machine to do in a way that it will be as creative (or more) and capable as humans are. Oakeshott refers to an attempt to instruct (humans) by pure reference to theory-driven rules as "rationalism" and considers it a huge error.
Now, both LWers and Oakeshott would agree that to learn about the world, you have to interact with it, and the more, the better. But you can see the conflict between his worldview and that of this site's frequenters. While Oakeshottians will dismiss any kind of non-apprenticed teaching as futile, those here wish to use deep theoretical understanding of the lawfulness of intelligence to create beings that can learn with different restrictions than what humans have; and also, to break down this "tacit knowledge" humans use in complex tasks, into steps so simple a machine could follow them.
Historically, the latter paradigm has been rife with failures next to ambitious promises, but in recent decades has made impressive strides in doing things that "of course" a machine could never do because of the "infinite" rules it would need to learn.
Also, Oakeshott's critique is reminiscent of the discussion we had recently about how much (useful) knowledge you can convey to someone merely through explanation, without passing on the experience set. I supported the view people typically overestimate the extent of the knowledge that can't be explained and give up too easily in putting it in communicable form.
Btw, the author, Gene Callahan is an antireductionist I've argued with in the past (that's a link to a part of an exchange I moved to my blog when he kept deleting my comments).
From Newcomb's Problem and Regret of Rationality:
If it turns out that the techniques we advocate predictably lose, even though we thought they were reasonable, even though they came from our best mathematical investigation into what a rational agent should do, then we will conclude that those techniques are not actually rational, and we should figure out something else.
Hi, I'm fascinated.
Hi, I made a couple posts a while back but recently have been simply lurking.
I would like to comment more and I think it would benefit me to toss my ideas out there and get some feedback. I think part of the problem is that while I have a decent understanding of many concepts promoted here (probably level 1, beginning to pass into level 2 on the Understanding your understanding scale) fully articulating my thoughts in a coherent and original manner is difficult. Most notably when discussing things with friends I find myself falling back on examples I've read here and have trouble coming up with my own analogies which seems symptomatic of a lack of understanding.
personal tidbits since this seems like the place:
I am a psychology undergraduate at the University of Texas and am hoping to go to graduate school for cognitive psychology. I am very interested in modeling human cognition and the way we think. Most notably I have a strong interest in decision theory and game theory and hope to do research in that area. Also 'priming' is exceptionally cool. I have also been working on getting some basic computer programming down and have some skill in both Python and Java.
Recreationally I enjoy biking, running and being outside in general. I play online poker semi-regularly and find it useful not only as something fun and profitable but as a fairly valuable introspective tool. The way I am playing and responding is a fairly accurate reflection of how I am dealing with life in general at the time. I have also started reading more and have recently finished Outliers and Blink by Malcolm Gladwell, Rational Decision Making in an Uncertain World by Robyn Dawes, and am currently working on I am a Strange Loop by Douglas Hofstadter.
Hi.
I've been lurking here and on OB for a couple of years. As other people have said, there seems to be a large amount of prerequisite knowledge required to post here. I usually find my own thoughts expressed more clearly by someone else in the comments, so I up-vote rather than just adding noise.
hi
Hi. Been reading the RSS feed for 3-4 months now. Slowly beginning to make sense of it all... understanding the specialized vocab and so forth. It's always been my goal to be as self-aware as possible, so I'm glad of all the interesting ideas here.
Hi. I've been lurking for quite a long time, first on OB then here.
Computer engineering student, interested in AGI and rationality. And foreign languages and stuff.
(Edit: I am especially interested in the mathematical formalization of AI - my hypothesis is that strong AI is a disorganized field in need of a more formal language to make better progress. Still a vague idea, which is why I'm just a lurker in the AI field, but I am quite interested in discussion on related topics.)
Hi. I'm a Caltech student in math/econ.
Hello there, I've been reading the site for around six months now. I am an education student; LW has certainly changed my perception of human behavior and learning, and has given me much to reflect upon.
Hello LessWrongers (Wrongites?)
Longtime lurker, from the beginning. Software dev for a bank. 23 yrs old. Great site
Hi. I've been lurking for a month or so now.
"Hi"
(Just standing up to be counted.)
G.
Hi, I study CS at Stanford, and I've been reading LW for about 6 months.
undergraduate or graduate? I will be starting my masters there next year ..
Hi, I'm a 28 year old video game music composer trying to understand my mind. I've just been reading random posts here for a month, but so far I love this site.
You might be interested in my luminosity sequence if you are interested in learning to understand your mind :)
Hello, I'm Simon. I'm studying a PhD in Economics. I cannot recall how I first began to read your blog. I don't manage to read everything, but I appreciate what I do read as it is often outside of what I customarily read. I don't find I have the time to comment properly as I'm spending time on research and teaching and coherent comments would be beyond me I fear after teaching undergraduate microeconomics for three hours.
I'll say Hi and I'll post this link which describes a study that showed that people are more likely to believe in pseudoscience if they are told that scientists disapprove of it:
http://www.alternet.org/module/printversion/146552
They are also much more likely to believe in pseudoscience if it has popular support.
Hi. Long-time lurker since Eliezer was posting at OB (which candidly I find far less interesting these days). I'm 37, and am a practicing lawyer with several small children; this keeps me sufficiently busy that I don't often have time to think hard enough to post here, although the discussions are usually quite interesting. Also, I'm pretty non-quantitative due to misspent undergraduate years. I view this site as place where generally I should be listening, not talking.
Hi.
Hi, I'm a lurker. You even managed to trick me into creating an account.
I believe that at least 50% of regular lurkers will not say "hi" in this thread.
Hi. I've been reading fairly religiously (haha) since the Overcoming Bias days. I post/comment little because of a perfectionist tendency (I want to get everything first).
I'm in the process of thoroughly going through the Sequences -- love every minute of it, though it's sometimes a little overwhelming...
Hi. UK lurker. Found Overcoming Bias many years ago from a link from Scott Aaronson's blog. Have been reading ever since. In case you're interested in demographic stuff, I'm a stats geek working in a finance firm. I'm very interested in Bayesianism in its application to finance.
I've been lurking since early OB. I am not here due to being Singularitarian but I've been using this site since I was in high school and through college to help keep myself from being a charlatan in any intellectual endeavor. I find that it takes regular reminders and dedication to not extend past the limits of my knowledge, and both OB and LW continually help to fine-tune my internal sense of "what I don't know."
To give a bit of a frame of reference, I'm studying social sciences and my specific problem domain is Educational Psychology and I'm interested in finding out how to render a subject into a receptive state for new information when they are dismissive. I'm still fairly early into my college track, so I haven't narrowed in any more than that, but I have my sights set on grad school.
Hi!
Hi.
Mostly-lurker here, save for the occasional mildly pithy comment. I'm a DBA/sysadmin by day, studying towards an Econ + Maths degree in my spare time. LW has a lot of parallels with my fields of interest, elucidates on a lot of areas where I have half-formed ideas and provides exceedingly worthy arguments for things I don't agree with.
Hi. Been following since Overcoming Bias. Love you guys. If google has replaced our wet RAM these days, I feel like this community could replace my "aha" generator.
PS: I was amused by the presence of a captcha on a site where so much optimistic AI discussion has taken place.
Hi, I came here via Overcoming Bias. I study Computer Sciences in Germany.
Hi, have been lurking for about 3 years already, first on OB, now on LW. As a non-native speaker with moderate IQ I find commenting difficult. However I enjoy most of the posts, and LW introduced me to various new topics, therefore I am really thankful for all the brilliant post writers. Thank you!
I love LW - it's one of my favorite reads, though I don't quite fully appreciate some of the more advanced rationality posts yet. Thank you all for making a great community.
Hi, I've been reading this blog for a while now, and I was thoroughly surprised to find so many like minded thinking people. I haven't commented any, because quite frankly I've had nothing to say. Hello all though.
Hi, I discovered this blog very recently - I have an economics background (Milton Friedman a big influence) and a growing interest in philosophy. This site popped up while I was searching for the 'underdog bias' (thinking there must be some level of human 'moral instinct') and this led me to the 'Why support the underdog?' article and then others. I'm really impressed by the high standard. Nick
Bonjour ...
"Immediate adaptation to the realities of the situation! Followed by winning!"
It’s so much easier to be a non-contributing zero. But I find myself unable to back down from an open request to drag myself out of the shadows of lurk and into the light of the rationality justice league. Part of the appeal of lurker status for me comes from my outlook on this site in general. I haven’t exactly figured out what I’m doing or what I believe in; but I do know I’ve still got a lot to figure out. Lurking lets me passively ponder interesting ideas proposed here without really committing to anything in particular. But having been prompted to post something I find myself uncertain as to what my level of involvement should be in this idea mill of rationality and humanity.
Hi. I've been an LW (and previously, OB) lurker for several years, but I haven't had time to provide my online presence with the care and feeding it needs. Three years of startup crunch schedules left me with a life maintenance debt, and I have a side project in dire need of progress, but once those items are out of the way I plan to delurk.
Hi! 8-)
Hi!
Found this site searching for fiction via Tv-tropes.
While I'm a new reader, I'll likely lurk a lot.
The internet is a constant deluge of input - my instinctive counter is to provide output only when I have something interesting to say, hoping others will reciprocate...
(And even then, I only feel comfortable when what I say is concise, relevant and new.)
After all, thousands of people might read my message; wasting their time would be unspeakably rude.
Fiction via TV Tropes ... the "Three Worlds Collide" page?
(I ask because I'm the one who started it - glad to hear it was useful, if it was!)
Hi. I lurk here and read every post but have never really felt like commenting. Neat blog though.
Made an account just to say "hi"
So ... Hi!
What if I'm not witty or rational enough to post a thought provoking idea?
Hi
Hi!
Hi.
Hi.
Heya
Hi! Lay-lurker here, I was just recently considering posting some questions in the next open thread and made an account then. We'll see how that goes, but it's nice to see this welcoming attitude!
However, a concern I have about more people being more active, and a reason I haven't signed up before, is that if more laypeople like myself begin to vote regularly, the posts we upvote will necessarily be the ones we both like and understand. Posts we don't understand, but which may be of equal or greater value, won't get upvoted on equal footing. Is there a comprehensive thread/discussion about the pros/cons of a greater user base here?
Hi. Nice to meet you all. :)