chaosmosis comments on Rationality Quotes April 2012 - Less Wrong

4 Post author: Oscar_Cunningham 03 April 2012 12:42AM


Comment author: chaosmosis 18 April 2012 05:29:45PM *  8 points [-]

"When I was young I shoved my ignorance in people's faces. They beat me with sticks. By the time I was forty my blunt instrument had been honed to a fine cutting point for me. If you hide your ignorance, no one will hit you and you'll never learn."

-- Fahrenheit 451

I'll be sticking around a while, although I'm not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it's beautiful). It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.

Also I doubt that I would be able to resist commenting even if I wanted to. That's probably mostly it.

Comment author: MixedNuts 20 April 2012 05:48:27PM *  25 points [-]

Tips for dealing with people with big egos:

  • Don't insult anyone, ever. If Wagner posts, either say "Hmm, why do you believe Mendelssohn's music to be derivative?" or silently downvote, but don't call him an antisemitic piece of shit.
  • Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
  • Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
  • Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
  • Stick closely to the question and do not involve the personalities of debaters.
  • Exception to the above: it's okay to pass judgement on a personality trait if it's a compliment. If you can't always avoid insulting people, occasionally complimenting them can help.
  • A lot of things are insults. You will slip up. This won't make people dislike you.
  • If you know what a polite and friendly tone is, have one.
  • If someone isn't polite and friendly, it means you need to be more polite and friendly.
  • If they're being very rude and mean and it's getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
  • If the "polite and about the question" part is empty, don't post.
  • If you have insulted someone in a thread - either more than once, or once and people are still hostile despite you being extra nice afterwards - people will keep being hostile in the thread and you should probably walk away from it.
  • If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
  • When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don't have much against you right now.
  • Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add "but".

On politeness:

  • Some politeness norms are stupid and harmful and wrong, like "You must not criticize even if explicitly asked to" or "Disagreement is impolite". Fortunately, we don't have these here.
  • Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
  • Some are mostly local communication protocols that help but can be costly to constrain your message around. It's okay to drop them if you can't bear the cost.
  • Some are about fostering personal liking between people. They're worthwhile to people who want that and noise to people who don't.
  • Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren't are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.

People who are exempted:

  • The aforementioned people, who will never accidentally insult anyone;
  • People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
  • wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.
Comment author: TheOtherDave 20 April 2012 07:12:29PM *  6 points [-]

I'll add to this that actually paying attention to wedrifid is instructive here.

My own interpretation of wedrifid's behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what's going on,
2) correctly recognizing that attempts to lower someone's status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked

I endorse this and have been trying to get better about #3 myself.

Comment author: Wei_Dai 20 April 2012 08:53:15PM 11 points [-]

The phrase "social alliances" makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?

Comment author: TheOtherDave 20 April 2012 11:10:33PM 7 points [-]

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance; it's in failing to reflect on whether I endorse A. If I do neither, then the community doesn't degenerate into tribal warfare, it degenerates into chaos.

Admittedly, chaos can be more fun, but I don't really endorse it.

All of that said, I do recognize that explicitly talking about "social alliances" (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn't help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).

(I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.)

Comment author: wedrifid 21 April 2012 06:05:17AM 16 points [-]

I feel vaguely like Will_Newsome, now. I wonder if that's a good thing.

Start to worry if you begin to feel morally obliged to engage in activity 'Z' that neither you, Sam, nor Pat endorses but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.

Comment author: TheOtherDave 21 April 2012 03:20:55PM 3 points [-]

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.)
I blame the stroke, though.

Comment author: wedrifid 21 April 2012 05:54:06PM 7 points [-]

Been there, done that. (Not specifically. It would be creepy if you'd gotten the specifics right.) I blame the stroke, though.

Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.

Comment author: TheOtherDave 21 April 2012 06:56:08PM 7 points [-]

It wasn't quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics... hm.
I remain uncomfortable discussing the specifics in public.

Comment author: Wei_Dai 21 April 2012 12:43:34AM *  4 points [-]

if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.

Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?

Comment author: TheOtherDave 21 April 2012 01:06:07AM 2 points [-]

Is establishing yourself as a reliable ally an instrumental or terminal goal for you?

Instrumental.

If the former, what advantages does it bring in a group blog / discussion forum like this one?

Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don't know how I could begin to itemize it.
To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

The kinds of alliances you've mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally.

Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.

If you mean to say further that it doesn't affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo "ally."

People's estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don't engage in discussion, because I no longer trust that they will engage reliably.

Are you hoping to establish other kinds of alliances here?

Not that I can think of, but honestly this question bewilders me, so it's possible that you're asking about something I'm not even considering. What kind of alliances do you have in mind?

Comment author: Wei_Dai 22 April 2012 02:19:03AM 1 point [-]

To pick one that came up recently, though, here's a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals. Another one that comes up far more often is other people's willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid, with "being a good discussion partner" a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)

What kind of alliances do you have in mind?

I didn't have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be, for example, that you're looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.

Comment author: wedrifid 22 April 2012 02:22:44PM 2 points [-]

It's not clear to me that these attributes are strongly (or even positively) correlated with willingness to "stick up" for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you're mostly signaling that you're not timid

This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn't seem to be a very accurate description of reality. A lot of information - and information I consider important at that - can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive 'timidity' can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you're not timid seems to be a mistake.

In my own experience - from back when I was timid in the extreme - the sort of "sticking up for" others, jumping to their defense against (unfair or undesirable) aggression, was one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.

(This is the impression I have of wedrifid, for example.)

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

Comment author: Wei_Dai 22 April 2012 07:46:27PM 2 points [-]

Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave's model seems far more accurate and useful in this case.

I find that my brain doesn't automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven't found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.

I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I'd be curious to find out.

Comment author: TheOtherDave 22 April 2012 05:38:46AM 0 points [-]

Fair enough; it may be that I overestimate the value of what I'm calling trust here.

Just for my own clarity, when you say that what I'm doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we've been discussing on this thread (or are they equivalent)?

I'm not especially looking to make real-life friends, though there are folks here who I wouldn't mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.

Comment author: Wei_Dai 22 April 2012 09:38:25AM 0 points [-]

I was talking about the abstract behavior that we were discussing.

Comment author: wedrifid 21 April 2012 05:46:48AM *  1 point [-]

If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam's ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can't come to agreement with Sam, I endorse acknowledging that I've unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that's beside the point here.)

I really like your illustration here. To the extent that this is what you were trying to convey by "3)" in your analysis of wedrifid's style then I endorse it. I wouldn't have used the "alliances" description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I'm happy with it as a simple model.

Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of "Sam", "Pat" and "A". In particular, there are many behaviors "A" whose execution will immediately place the victim of said behavior into the role of "ally that I am obliged to support".

Comment author: TheOtherDave 21 April 2012 03:03:47PM *  2 points [-]

Yeah, agreed about the distracting phrasing. I find it's a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.

Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.

Also, if you have an articulable model of how you make those judgments I'd be interested, especially if it uses more socially acceptable language than mine does.

Edit: Also, I'm really curious as to the reasoning of whoever downvoted that. I commit to preserving that person's anonymity if they PM me about their reasoning.

Comment author: wedrifid 21 April 2012 05:29:58PM 0 points [-]

I'm really curious as to the reasoning of whoever downvoted that.

For what it is worth, sampling over time suggests multiple people - at one point there were multiple upvotes.

I'm somewhat less curious. I just assumed it was people from the 'green' social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.

Comment author: MixedNuts 20 April 2012 07:29:28PM 8 points [-]

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

Comment author: TheOtherDave 20 April 2012 07:42:42PM 4 points [-]

Sure. Then again, if you'd only intended that for chaosmosis' benefit, I assume you'd have PMed it.

Comment author: wedrifid 21 April 2012 12:24:39AM 0 points [-]

Might be too advanced for someone who just learned that saying "Please stop being stupid." is a bad idea.

Well... I've seen people use nearly that exact phrase to great effect at times... But that's not the sort of thing you'd want to include in a 'basics' list either.

Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!

Comment author: komponisto 22 April 2012 08:36:58PM 5 points [-]

wedrifid, who is somehow capable of pleasant interaction while being a complete jerk

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

In saying this, I don't know whether I'm expanding on your point or disagreeing with it.

Comment author: Wei_Dai 24 April 2012 05:50:04AM 3 points [-]

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

Comment author: wedrifid 24 April 2012 06:45:45AM 3 points [-]

I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I've seen so far (your comment, TheOtherDave's, this comment by wedrifid) are not really forming into a coherent whole for me.

That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!

Comment author: wedrifid 22 April 2012 08:51:56PM *  4 points [-]

Regardless of whether or not this is compatible with being a "complete jerk" in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one's other goals (naturally the methods used are community-specific but that is more than good enough).

I appreciate your kind words komponisto! You inspire me to live up to them.

Comment author: [deleted] 18 April 2012 05:33:06PM 7 points [-]

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos.

This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.

Comment author: [deleted] 18 April 2012 07:54:07PM *  2 points [-]

And it is actually a rationality problem.

You mean to the extent that any problem at all is a rationality problem, or something else?

Comment author: [deleted] 18 April 2012 10:28:32PM 2 points [-]

It's a bias, as far as I'm concerned, and something that needs to be overcome. People with egos can be right, but if one can't deal with the fact that they're either right or wrong regardless of their egotism, then one is that much slower to update.

Comment author: David_Gerard 18 April 2012 11:24:18PM *  0 points [-]

Dealing with others' irrationality is very much a rationality problem.

Comment author: thomblake 18 April 2012 06:06:15PM *  8 points [-]

Plus, I like the idea of losing so much karma in one day and then eventually earning it all back

This discussion is off-topic for the "Rationality Quotes" thread, but...

If you're interested in an easy way to gain karma, you might want to try an experimental method I've been kicking around:

Take an article from Wikipedia on a bias that we don't have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer's more straightforward posts on a bias, examples first.

My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.

Comment author: chaosmosis 18 April 2012 06:18:30PM *  1 point [-]

No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.

Maybe I'll eventually write something like that. Not yet.

Comment author: DSimon 18 April 2012 10:52:19PM *  10 points [-]

It needs to be a drawn out and painful and embarrassing process.

Oh, you want a Quest, not a goal. :-)

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.

Comment author: orthonormal 22 April 2012 06:48:41PM 5 points [-]

Try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

I nominate this as the Less Wrong Summer Challenge, for everybody.

(One modification I'd make: it shouldn't necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)

Comment author: wedrifid 18 April 2012 11:33:41PM 0 points [-]

In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.

That actually sounds fun now that you put it like that!

Comment author: gRR 22 April 2012 07:16:39PM 1 point [-]

And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.

Comment author: Bugmaster 18 April 2012 10:54:46PM 1 point [-]

You just need a reasonably friendly tone. I have a bunch of karma, and I haven't posted any articles yet (though I'm working on it).

Comment author: DSimon 18 April 2012 10:56:15PM 2 points [-]

Indeed, that would work if karma were merely the goal. But chaosmosis expressed a desire for a "painful and embarrassing process", meaning that the ante and risk must be higher.

Comment author: David_Gerard 18 April 2012 11:23:28PM 5 points [-]

One day I will write "How to karmawhore with LessWrong comments" if I can work out how to do it in such a way that it won't get -5000 within an hour.

Comment author: DSimon 18 April 2012 11:38:44PM *  16 points [-]

I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.

Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.

Once that's done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a "strategy" onto a run of comments that happened to succeed.

Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)

Comment author: Dias 19 April 2012 07:58:43AM 5 points [-]

Have some third party (or several) that LW would trust hold on to it in secret.

Nitpick: cryptography solves this much more neatly.

Of course, people could accuse you of having an efficient way of factorising numbers, but if you do karma is going to be the least of anyone's concerns.

Comment author: ciphergoth 19 April 2012 12:31:03PM 4 points [-]

Factorization doesn't enter into it - to precommit to a message that you will later reveal publically, publish a hash of the (salted) message.
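The hash-commitment scheme ciphergoth describes is simple enough to sketch in a few lines of Python (the function names and strategy text here are purely illustrative, not part of any actual LW mechanism):

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[str, bytes]:
    """Publish the returned digest now; keep the salt and message private."""
    salt = secrets.token_bytes(16)  # random salt prevents brute-force guessing of short messages
    digest = hashlib.sha256(salt + message).hexdigest()
    return digest, salt

def verify(digest: str, salt: bytes, message: bytes) -> bool:
    """Anyone can later check that the revealed message matches the published digest."""
    return hashlib.sha256(salt + message).hexdigest() == digest

digest, salt = commit(b"my secret karma strategy")
assert verify(digest, salt, b"my secret karma strategy")
assert not verify(digest, salt, b"a different strategy")
```

The salt is what makes the precommitment safe: without it, a short or guessable message could be recovered from its hash alone by trying candidates.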

Comment author: wedrifid 19 April 2012 08:29:12AM *  1 point [-]

Nitpick: cryptography solves this much more neatly.

But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties but give the trusted parties only the encrypted strategy (or a hash thereof).

Comment author: Bugmaster 19 April 2012 09:25:50AM 0 points [-]

The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects.

What kind of side effects ? I have no formal training in cryptography, so please forgive me if this is a naive question.

Comment author: wedrifid 19 April 2012 09:32:11AM 2 points [-]

What kind of side effects ? I have no formal training in cryptography, so please forgive me if this is a naive question.

I mean you still have to give the encrypted data to someone. They can't tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don't want the act of giving encrypted fore-notice to influence behavior.

Comment author: Bugmaster 19 April 2012 05:27:59PM *  1 point [-]

Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
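Bugmaster's suggestion amounts to image steganography. A toy least-significant-bit sketch over raw bytes (a real version would operate on actual image pixel data via a library such as Pillow; all names here are illustrative):

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide each bit of the message in the least significant bit of a cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover data too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the message bit
    return out

def extract(stego: bytearray, length: int) -> bytes:
    """Recover `length` bytes of message from the LSBs of the stego data."""
    message = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        message.append(byte)
    return bytes(message)

cover = bytearray(range(256))  # stand-in for image pixel data
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

Changing only the low bit of each byte leaves the cover data visually indistinguishable from the original, which is the whole point of hiding the message in an innocuous header image.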

Comment author: David_Gerard 18 April 2012 11:41:45PM 3 points [-]

My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)

Comment author: TheOtherDave 19 April 2012 12:18:18AM 5 points [-]

IME, per-comment EV is way higher in the HP:MoR discussion threads.

Comment author: David_Gerard 19 April 2012 07:03:12AM 2 points [-]

It so is. Karmawhoring in those is easy.

This suggests measuring posts for comment EV.

Comment author: Hul-Gil 19 April 2012 07:20:26AM *  3 points [-]

This suggests measuring posts for comment EV.

Now that is an interesting concept. I like where this subthread is going.

Interesting comparisons to other systems involving currency come to mind.

EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties... for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear that they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--

...okay, perhaps some sleep is in order first.

Comment author: David_Gerard 19 April 2012 02:25:03PM 1 point [-]

And then, we begin to sell LW karma for Bitcoins, and--

It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.

Comment author: [deleted] 19 April 2012 05:07:55PM 1 point [-]

You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution.

Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.

Comment author: RichardKennaway 22 April 2012 07:15:27PM 5 points [-]

That's like getting a black belt in karate by buying one from the martial arts shop. It isn't karmawhoring unless you're getting karma from real people who really thought your comments worth upvoting.

Comment author: [deleted] 23 April 2012 06:55:47PM 1 point [-]

“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?

Comment author: RichardKennaway 23 April 2012 07:14:54PM *  5 points [-]

It is good to have one's comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute's reward -- money -- is of some actual use.

Comment author: Strange7 21 April 2012 07:25:11AM 5 points [-]

Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.

Comment author: Bugmaster 19 April 2012 05:29:21PM 1 point [-]

This would indeed count as "minimal contribution", but still sounds like a lot of work...

Comment author: wedrifid 18 April 2012 07:10:23PM 2 points [-]

It's not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across.

It is what we would call an "instrumental rationality" problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos... which you seem to be taking steps towards now!