Risto_Saarelma comments on Rationality, Transhumanism, and Mental Health - Less Wrong Discussion

Post author: ialdabaoth 14 October 2012 09:11AM

Comment author: ialdabaoth 15 October 2012 06:06:18AM  14 points

I will use this very post to illustrate!

You just asked, "give three concrete examples from your life."

My first instinct is that this is a challenge, an attempt to set me up as unreliable and "whiny" in front of the pack.

According to this instinct, if I fail to respond to you, you will have "called me out" - and by failing to respond, I will lose face.

Also according to this instinct, if I DO respond to you, no matter how I do so, you will manage to turn it around in such a way that I will appear to be lying or deliberately miscommunicating my experience for the sake of sympathy - and will again lose face.

My natural response to this instinct is to attempt to describe these examples in the most self-deprecatory way possible, but I know that any attempt to do so will cause me to seem contemptibly weak - and I will again lose face.

As I continue to process this dilemma, I attempt to work out the actual probabilities that any given decision I make will lead to a given outcome. However, as I do so, something internally pegs my "lose face" utility to +ERR.OVERFLOW, and the error cascades all the way through my multiplications and completely poisons the [utility*probability] sort.

Eventually, I just say "fuck it" and come clean to you that I'm having trouble answering your question due to an error. My instinct tells me that, in so doing, you will turn this around on me and I will again lose face. I start processing how I can explain to you that I'm having trouble answering your question, building different strategies for explanation and weighing their probable utility payoffs, but then the bug pops up again (or another, similar one) and pegs one or two of the outcome utilities to +ERR.OVERFLOW or -ERR.OVERFLOW (or sometimes even ERR.DIV0), and the whole [utility*probability] sort gets poisoned again.
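To make the metaphor concrete (this is purely illustrative; the action names and numbers are made up, not a real model of my head), the bug looks something like this in code:

```python
import math

def choose_action(outcomes):
    """Pick the action with the highest expected utility.

    outcomes maps each action to a list of (probability, utility)
    branches. Returns (best_action, expected_utilities).
    """
    expected = {
        action: sum(p * u for p, u in branches)
        for action, branches in outcomes.items()
    }
    best = max(expected, key=expected.get)
    return best, expected

# A well-behaved decision problem sorts cleanly:
ok = choose_action({
    "respond":    [(0.6, 5.0), (0.4, -2.0)],  # EU = 2.2
    "stay_quiet": [(1.0, 1.0)],               # EU = 1.0
})

# But once the "lose face" utility gets pegged to -inf, the overflow
# propagates through the multiplication: that action's EU is -inf no
# matter what its other branches say, and the comparison is poisoned.
# (A +inf term dominates the sort the same way, in the other direction.)
buggy = choose_action({
    "respond":    [(0.6, 5.0), (0.4, -math.inf)],  # EU = -inf
    "stay_quiet": [(1.0, 1.0)],
})
```

The point being: one saturated utility term is enough to swamp every finite consideration in the sort, which is why no amount of careful probability estimation fixes it.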

Am I making any sense?

I guess what I'm trying to say is, your question scares me, and I'm not sure if it's a legitimate query for information or an attempt to "trip me up" socially, and THAT RIGHT THERE is the problem itself.

So here's to honesty, or something.

Comment author: Risto_Saarelma 15 October 2012 12:21:42PM 4 points

"Name three" is an LW site trope.

Comment author: ialdabaoth 15 October 2012 12:36:40PM 2 points

And hence a pack-identification ritual, which I did not respond to correctly? And also a bona-fide request for information?

Shit, my recursion map just forked. N-dimensionally.

Comment author: handoflixue 15 October 2012 09:18:29PM 2 points

a pack-identification ritual, which I did not respond to correctly?

Going out on a limb here: Yes, correct. I would have failed it too, and I've been here for a year. People here tend not to care if you fail their pack-identification rituals, and will actually get a bit annoyed if you start trying to optimize for that.

In other words, it's not important that it's a pack-identification ritual.

(Disclaimer: There are packs that care a lot about rituals. My general philosophy is to avoid all such packs, because I suck at such rituals. I like LessWrong because even when people downvote me and otherwise disapprove of me, I've never had the sensation that the pack is trying to ostracize me or punish me for failure-to-observe-pack-rituals)

Comment author: ialdabaoth 15 October 2012 10:41:57PM 1 point

On this site, there are discussions about believing-in-belief, and how to purge it when you are merely "aping the belief" in something wrong, like religion.

I want to believe that there are packs that do not care about rituals, but I cannot formulate an actual belief that this is true; only a "belief-in-belief" that it is true.

How does one modify the process of purging "belief-in-belief"s that happen to actually correspond to reality? Because it seems that getting the right answer for the wrong reason is just as bad as being wrong.

Comment author: NancyLebovitz 03 November 2012 04:10:27PM 0 points

What do you mean by "ritual"?

Comment author: falenas108 15 October 2012 04:20:18PM 1 point

This is a bit of an unusual case. In most contexts, "name 3" would be a kind of challenge. It just happens to be an actual request for information here.

Comment author: NancyLebovitz 16 October 2012 03:37:52AM 1 point

I'd say you responded quite well by giving a detailed description of your mental processes. You've got 12 karma points for that reply.

Comment author: ChristianKl 15 October 2012 03:57:22PM 1 point

Then how about taking it as a learning opportunity? There's no reason why you can't update and still provide three examples.

Learning from mistakes is both normal social behavior and rational.