Comment author: TheOtherDave 26 October 2013 11:20:23PM 0 points [-]

Gotcha; I understand now. If that's actually a reliable method of analysis for you, I'm impressed by your memory, but lacking the evidence of its reliability that you have access to, I hope you'll forgive me if it doesn't significantly raise my confidence in the retroactive-karma-penalty theory.

Comment author: BaconServ 27 October 2013 02:37:41AM 0 points [-]

Certainly; I wouldn't expect it to.

Comment author: TheOtherDave 26 October 2013 10:47:41PM 1 point [-]

If you feel like responding, you can assume I mean by "bias" whatever you meant by it when you used the word.

Conversely, if you feel like turning this into an opportunity for me to learn to clear up my mental confusions and then demonstrate my learning to you, that's of course your call.

If I experience such an epiphany I may let you know whether your stance thereby becomes clearer to me.

Comment author: BaconServ 27 October 2013 02:20:38AM 0 points [-]

Hah. I like and appreciate the clarity of options here. I'll attempt to explain.

A lot about social situations is something we're directly told: "Elbows off the table. Close your mouth when you chew. Burping is rude; others will become offended." Other rules are more biologically inherent; murder isn't likely to make you popular at a party. (At least not the positive kind of popularity...) What we're discussing here lies somewhere between these two poles. We'll consider aversion to murderers the least biased, having very little bias to it and being more a rational reaction, and we'll consider asserted matters of "manners" maximally biased, having next to nothing to do with rationality and everything to do with believing whatever you're told.

It's a fuzzy subject without a full understanding of psychology, but for the most part these decisions about social interaction are made consciously. If you challenge a biased individual, whatever the reason for or cause of their bias, on their strong opinions, you're liable to start an argument. There are productive arguments and unproductive arguments alike, but if the dinner table is terribly quiet already and an argument breaks out between two members, everyone else has the option of "politely" letting the argument run its course, or intervening to stop this silly discussion that everyone's heard time and time again and is tired of hearing. Knowing all too well how these kinds of things start, proceed, and stop, the most polite thing you can do to preserve the atmosphere everyone is enjoying is simply not to indulge the argument. Find another time, another place. Do it in private. Do whatever. Just not now at the dinner table, while everyone's trying to have a peaceful meal.

There's an intense meme among rationalists that whenever two rational agents disagree, they must perform a grand battle. This is just not true. There are many, many opportunities in human interaction to solve the same problem. What you find is that people never work up the courage to do it at all, because of how "awkward" it would be, or for any number of other excuses. "What if s/he rejects me? I'll be devastated!" Intelligent agents are something to be afraid of, especially when their reactions control your feelings.

The courtesy isn't so much for the opiner as it is for everyone else present. It is a result of bias, but not on the part of the people signaling silence; they're just trying to keep things pleasant and peaceful for everyone.

Of course my description here could be wrong, but it's not. The easy way to determine this is to ask each person in turn why they chose to be silent. Pretty much all of them are going to recite some subset of my assessment. Some people may have acquired that manner from being explicitly instructed in it, while others derived it from experience. The former case can be identified by naive confusion: "Mommy, why didn't anyone tell him he was being racist?" The answer is "You'll understand when you're older," because people periodically fail to recognize the usefulness of civility. You'll see it eventually, possibly coming from the people who were so surrounded by mannerly company that they never acquired the experience that got everyone else to adopt the manner. Even if the manner makes sense rationally, it could still be the result of bias; but it can be hard to convince a child of complex things like that, so the bias doesn't play a role beyond that person eventually finding that the things they were told as a child, and distinctly remember never understanding while growing up, did actually make sense in reality.

You can't fault the child for being ignorant, but you can fault them for not recognizing the truth of mother's words when the situation comes up that's supposed to show them why the wisdom was correct. If they don't learn it from experience like everyone else does, something went wrong. Possibly they overcompensated when they rejected Christianity and decided it was a total fluke that their parents were competent enough to take care of a child. All those things that didn't have to do with Christianity? Nope. Biased by Christianity. Out the window they go, along with the bathwater. When grandma says something racist and everyone goes silent, that is not tacit approval; that is polite disapproval. Failing to recognize something so obvious is going to be the result of some manner of cognitive bias, whether it's a victim mindset, white knighting on Tumblr's behalf, an extreme bias against Christianity, or whatever else makes you think that the wisdom handed down and independently verified by generation after generation of highly intelligent agents capable of abstract reasoning is the thing that contradicts rationality.

Our ancestors didn't derive quantum mechanics, no. That doesn't make them unintelligent by any stretch of the imagination. When it came to interacting with other intelligent agents, we had intense pressure on us to optimize, and we did. Only now are we formally recognizing the principles that underlie deep generational wisdom.

So to answer concisely:

With the qualification that "treating silence as a way of expressing that the opiner deserves courtesy" is the result of bias, but that the bias originates in the opiner, not the analyzer of the silence, and speaking strictly about the analysis of silence in modern social settings...

Do you in fact believe that?

Yes.

Can you provide any justification for believing it?

I can cite a pretty large chunk of the history of civilized humanity, yes.

The confusion arises from your not recognizing that decision theory is embedded more deeply in our psychology than the conscious mind: primitive decision theory (everything we've formally derived about decision theory up to this point) is embedded in our evolutionary psychology. There's a ton more nuance to human interaction than social justice's founding premise of "Words hurt!!! (What are sticks and stones?)"

Comment author: TheOtherDave 26 October 2013 10:41:33PM 2 points [-]

I'm utterly unclear on what evidence you were searching for (and failing to find) to indicate a source of an immediate 15-point karma drop. For example, how did you exclude the possibility of 15 separate downvotes on 15 different comments? Did you remember the previous karma totals of all your comments?

Comment author: BaconServ 26 October 2013 10:59:51PM 0 points [-]

More or less, yeah. The totaled deltas weren't of the necessary order of magnitude, by my approximation. It's not that many pages if you set the relevant preference to 25 per page and have iterated all the way back a couple of times before.

Comment author: TheOtherDave 26 October 2013 01:19:53PM 2 points [-]

It sounds like you believe that treating silence as a way of expressing that the opinion enjoys social support is the result of bias, but that treating silence as a way of expressing that the opiner deserves courtesy though the opinion is wrong is not the result of bias.

Do you in fact believe that?
If so, can you provide any justification for believing it? Because it seems implausible.

Comment author: BaconServ 26 October 2013 10:32:55PM 0 points [-]

I'd need an expansion on "bias" to discuss this with any useful accuracy. Is ignorance a state of "bias" when abundant information exists to the contrary of the naive reasoning from ignorance? Please let me know if my stance becomes clearer once you mentally disambiguate "bias."

Comment author: TheOtherDave 26 October 2013 12:59:37PM 3 points [-]

It wasn't retroactive when I did this test a while back. Natch, code changes over time, and I haven't tested recently.

Comment author: BaconServ 26 October 2013 10:22:48PM 0 points [-]

I iterated through my entire comment history to find the source of an immediate -15 spike in karma; I couldn't find anything. My main hypothesis was a moderator reprimand until I put the pieces together on the cost of replying to downvoted comments. Further analysis today seems to confirm my suspicion. I'm unsure whether the retroactive quality of it is immediate or on a timer, but I don't see any reason it wouldn't be immediate. Feel free to test on me; I think the voting has stabilized.

Comment author: linkhyrule5 26 October 2013 08:57:39AM 1 point [-]

By definition, I can't really guarantee that the information it gives you will be surprising. I can tell you that I consider myself a fairly luminous person and that RescueTime still managed to surprise me.

At any rate: I also don't think you're going to get better than RescueTime. It keeps track of everything, does it down to the second, and does it without you having to notice - and it certainly helped me, so there's one data point.

Comment author: BaconServ 26 October 2013 07:41:20PM 0 points [-]

I see. I'll have to look into it some time.

Comment author: gattsuru 26 October 2013 03:44:13AM 0 points [-]

I'd expect more loss than that if someone really wanted to disable you; systematic karma abuse would result in karma loss equal to either some multiple of your total post count or a multiple of the number of posts displayed per user history page (by default, 10).

Comment author: BaconServ 26 October 2013 09:00:15AM -2 points [-]

Actually, I think I found the cause: commenting on comments below the display threshold costs five karma. I believe this might actually be retroactive, so that downvoting a comment below the display threshold takes five karma from each user possessing a comment under it.
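
If that hypothesis is right, the mechanic would look something like the minimal Python sketch below. Everything here is an illustrative guess, not the site's actual code: the five-point penalty comes from the comment above, while the -3 display threshold, the class shapes, and the per-user (rather than per-comment) charge are all assumptions.

    from dataclasses import dataclass, field

    REPLY_PENALTY = 5        # hypothesized cost of having replied below the threshold
    DISPLAY_THRESHOLD = -3   # assumed lowest score at which a comment is still shown

    @dataclass
    class User:
        karma: int = 0

    @dataclass
    class Comment:
        author: str
        score: int = 0
        replies: list = field(default_factory=list)

        def descendants(self):
            """Yield every comment nested anywhere under this one."""
            for reply in self.replies:
                yield reply
                yield from reply.descendants()

    def apply_downvote(comment, users):
        """Downvote `comment`; the moment it crosses below the display
        threshold, retroactively charge each user with a reply beneath it."""
        comment.score -= 1
        if comment.score == DISPLAY_THRESHOLD - 1:  # just became hidden
            for author in {reply.author for reply in comment.descendants()}:
                users[author].karma -= REPLY_PENALTY

Under this sketch, a single downvote that pushes a parent comment below the threshold instantly docks everyone who had replied beneath it, which would account for a sudden multi-point drop with no matching per-comment deltas.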

Comment author: linkhyrule5 26 October 2013 08:33:39AM 0 points [-]

I have basically this problem (I'm fairly sure it's tied into my ADHD). As noted below, RescueTime is an excellent solution: it'll keep track of different tabs in the same program separately, do it automatically, and do it down to the second - so you get a very accurate result at the end. I've found some very valuable results from this - for starters, that I work only about 50% of my "work hours" even on a good day, and that I spend much more time on random surfing than I thought I did.

Comment author: BaconServ 26 October 2013 08:45:34AM -1 points [-]

As a baseline, I need a program that will give me more information than simply being slightly more aware of my actions does. I want something that will give me surprising information I wouldn't have noticed otherwise. This is necessarily non-trivial, especially given my knack for metacognition.

Comment author: BaconServ 26 October 2013 08:35:08AM 1 point [-]

A habit I find my mind practicing incredibly often is simulation of the worst case scenario. Obviously the worst case scenario for any human interaction is that the other person will become murderously enraged and do everything in their power to destroy you. This is generally safe to dismiss as nonsense/completely paranoid. After numerous iterations of this, you start ignoring the unrealistic worst-possible scenarios (which often make so little sense there is nothing you could do to react to them) and get down to the realistic worst case scenario. Oftentimes in my youth this meant thinking about the reaction to my saying exactly what I felt and thought. The reactions I predicted were inaccurate to the point of caricature, but I often found that, even in the worst case scenario that made half sense, there was still a path forward. It wasn't the end of the world or some irreversible horror that would scar me forever; it was just an event where emotions got heated. That's generally it. There's little way to create a lasting problem without planning to create one.

Obviously this doesn't apply to supernatural actions on your part (creating strong AI is, in many ways, a supernatural scenario), but since those lie outside the realm of common logic, you have to handle them specially. Interestingly, when I was realistic about it, people didn't react too badly when I thought about what would happen if I suddenly performed some intensely supernatural feat like telekinesis. Sure, it's surprising, and they'll want you to help them move, but there's nothing they can really do if you insist you want to keep it a secret. They pretty much have to respect your right to self-determination. Of course they could always go supervillain on you like in the comics, but that's not a terribly realistic worst-case scenario even if it were strictly possible.

Of course it sounds like meaningless fiction at that point, but it serves to illustrate just how bad the worst case scenario is; I've found it is very hard to pretend the worst case is immensely terrible when you think about it realistically.

Comment author: ahbwramc 26 October 2013 05:55:55AM 7 points [-]

Okay, let's see.

I've noticed a mental phenomenon I call crystallization. I'm sure other people have noticed it, and they might even have a similar name for it. It's basically where you encounter a new thought or idea that takes a bunch of vague, half-formed thoughts you had floating around in the back of your head, and crystallizes them - condenses them into one overarching, explicit idea. The explicitness is very important - pre-crystallized thoughts are not explicit. Crystallization can be almost an insidious process, in a way, in that you can wind up holding new ideas or beliefs that you thought you held all along - you don't even notice yourself learning. In that sense it's related to hindsight bias - things seem obvious after you know them.

Random example: I always thought libertarianism held some appeal to me, but I couldn't put my finger on what exactly. Then I read Yvain's non-libertarian FAQ and came upon the following sentence:

"Unlike the mix-and-match philosophies of the Democratic and Republican parties, libertarianism is coherent and sometimes even derived from first principles."

Aha! That's it exactly. What attracted me to libertarianism was its simplicity and self-consistency. Makes sense. After reading that sentence it seems obvious. But was it obvious beforehand? Probably not - I had had vague, not-spelled-out thoughts along those lines, but I had never put it into words before. There exists a very clear difference between my thinking before and after reading that sentence, that I might not have even noticed if I didn't have this notion of crystallization.

I post this hoping to crystallize the idea of crystallization itself for people. I think a lot of people have - of course - vague, half-formed notions that something like this is true, but they haven't spelled it out explicitly - and I think explicitness in this case is very important.

Comment author: BaconServ 26 October 2013 08:16:25AM 1 point [-]

I've noticed that I crystallize discrete and effective sentences like that a lot in response to talking to others. Something about the unique way they need things phrased in order to understand results in compelling crystallized wisdom that I simply would not have figured out nearly as precisely if I hadn't explained my thoughts to them.
