Comment author: Logos01 06 October 2011 10:00:40PM 3 points [-]

As for the cognitive load, why not state assumptions at the beginning of an essay where possible,

I just now caught this, and... this is, I believe, where we have our fundamental disconnect.

By restricting the dialogue to essays, the overwhelming majority of the meaningfulness of what I'm trying to say is entirely eliminated: my statements have been aimed at discussing the heuristic of measuring the cognitive burden per "unit" of information when communicating. The fact is that in a pre-planned document of basically any type one can safely assume a vastly greater available "pool of cognition" in one's audience than in, say, a one-off comment in response to it, a youtube video comment, or something said over beers on a Friday night with your drinking-buddies.

I am struck by the thought that this is metaphorically very similar to how Newton's classical mechanics equations manifest themselves from quantum mechanics after you introduce enough systems, or how the general relativity equations become effectively conventional at "non-relativistic" speeds: when you change the terms of the equations, the apparent behaviors become significantly different. Just as there's no need to bother considering your own relativistic mass when deciding whether or not to go on a diet, the heuristic I'm trying to discuss is vanishingly irrelevant to anything that one should expect from a thought-out-in-advance, unrestricted-in-length document.

Comment author: GilPanama 09 October 2011 03:20:32AM 2 points [-]

Upvoted for clear communication.

I'm sort of puzzled, though, as to how I could have possibly interpreted your statements as applying to anything but the post and the comments on it; I saw no context clues suggesting that you meant "in everyday conversation." Did I miss these?

That said, if one of us had added just three or four words of proviso earlier, limiting our generalizations explicitly, we could have figured the disconnect out more quickly. I could have said that my generalizations apply best to essays and edited posts. You could have said that your generalizations apply best to situations where the added cost of qualifiers carries a higher burden.

Because we did not explicitly qualify our generalizations, but instead relied on context, we fell prey to a fake disagreement. However, any vindication I feel at seeing my point supported is nullified by the realization that I, personally, failed to apply the communication strategy that I was promoting.

Oops.

Comment author: Logos01 06 October 2011 07:57:07AM 2 points [-]

True but misleading. One should seek to avoid eliminating relevant meaning in the process of making those generalizations.

(Formatting tip: you need to add two spaces at the end of the previous line to get lesswrong's commenting markup language to "<br>"/"\n". Two newlines will "<p>".)
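A quick sketch of that markup behavior (trailing spaces are shown here as `·` because they are otherwise invisible; the exact rendering is the comment engine's, not verified here):

```
First line··        two trailing spaces, so the break renders as <br>
Second line

Third line          preceded by a blank line, so it renders as a new <p>
```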

I follow the convention of thinking that provisos are somewhere between standard deviations and significant digits. When someone adds that proviso "asexual/homosexual" -- they are changing the relevant level of precision necessary to the conversation.

For example; if I say "Men and women get married because they love each other", then the fact that some men/women don't marry, or the fact that intersex people aren't necessarily men or women, or the fact that GLBT people who marry are also likely to do so because of love, or the fact that some marriages are loveless is only a distraction to the conversation at hand.

While this seems like a trivial item for a single statement, the thing about this is that such provisos propagate across all dependent statements, meaning that the informational value of all dependent statements is reduced by each such proviso made.

Consider the difference in meaning between "Men and women marry each other because they love each other" and "Men/women/intersex individuals and other men/women/intersex individuals may or may not marry one another in groups as small as two with no upper bound for reasons that can vary depending on the situation."

This is, granted, an extreme example (reductio ad absurdum) but I make it to demonstrate the value of keeping in mind your threshold of significance when making a statement. Sometimes, as counterintuitive as it may seem, less accurate statements are less misleading.

Comment author: GilPanama 06 October 2011 09:36:08AM 8 points [-]

When someone adds that proviso "asexual/homosexual" -- they are changing the relevant level of precision necessary to the conversation.

No, they are pointing out that in order to apply to a case they are interested in, the conversation must be made more precise.

For example; if I say "Men and women get married because they love each other", then the fact that some men/women don't marry, or the fact that intersex people aren't necessarily men or women, or the fact that GLBT people who marry are also likely to do so because of love, or the fact that some marriages are loveless is only a distraction to the conversation at hand.

The last one isn't a distraction, it's a counterexample. If you want to meaningfully say that men and women marry out of love, you must implicitly claim that loveless marriages are a small minority. If someone says, "A significant number of marriages are loveless," they aren't trying to get you to add a trivializing proviso. They're saying that your generalization is false.

Consider the difference in meaning between "Men and women marry each other because they love each other" and "Men/women/intersex individuals and other men/women/intersex individuals may or may not marry one another in groups as small as two with no upper bound for reasons that can vary depending on the situation."

This isn't a reductio, it's a strawman. When you add provisos to a statement that is really nontrivial, you do not turn "generally" into "may or may not." You turn "always" into "generally", or "generally" into "in the majority of cases".

In any case, what about "People who marry generally do so out of love?" This retains the substance of the original statement while incorporating the provisos. All that is gained is real clarity. All that is lost is fake clarity. (And if enough people are found who marry for other reasons, it is false.)

Comment author: Logos01 06 October 2011 07:40:01AM *  1 point [-]

This isn't about charity, but clarity.

In another subthread I referenced the "Harry Potter and the Methods of Rationality" 'fanfic' written by Eliezer, where he mentioned how many fewer digits of Pi rational!Harry knew as compared to rational!Hermione.

The point is that I'm concerned not with charity nor with clarity, but rather with sufficiency to the current medium. Each of those little "costs next to nothing" statements actually do have a cost, one that isn't necessarily clear initially.

Are you familiar at all with how errors propagate in measurements? Each time you introduce new provisos, those statements affect the "informational value" of each dependent statement in its nest. This creates an analogous situation to the concept of significant digits in discourse.

For a topic like lukeprog's, in other words, the difference between 99% and 80% of women is below the threshold of significance. Eliminating it altogether (until such time as it becomes significant) is an important and valuable practice in communication.

Failure to effectively exercise that practice will result in needless 'clarifications' distracting from the intended message, hampering dialogs with unnecessary cognitive burden resultant from additional nesting of "informational quanta." In other words; if you add too many provisos to a statement, an otherwise meaningful and useful one will become trivially useless. An example of this in action can be found in another subthread of this conversation, where someone stated he felt that there is a "trend among frequent LessWrongers to over-generalize". This has informational meaning. He later added a 'clarification' that he hadn't intended the statement as an indication of population size, which totally reversed the informational value of his statement from an interesting one to a statement so utterly trivial that it is effectively without meaning or usefulness.

Comment author: GilPanama 06 October 2011 09:07:14AM 2 points [-]

Each of those little "costs next to nothing" statements actually do have a cost, one that isn't necessarily clear initially.

The cost of omitting them isn't clear initially, either.

Are you familiar at all with how errors propagate in measurements? Each time you introduce new provisos, those statements affect the "informational value" of each dependent statement in its nest. This creates an analogous situation to the concept of significant digits in discourse.

I was generally taught to carry significant figures further than strictly necessary to avoid introducing rounding errors. If my final answer would have 3 significant digits, using a few buffer digits seemed wise. They're cheap.

Propagation of uncertainty is not a reason to drop qualifiers. It's a reason to use them. When reading an argument based on a generalization, I want to know the exceptions BEFORE the argument begins, not afterwards. That way, I can have a sense of how the uncertainties in each step affect the final conclusion.

For a topic like lukeprog's, in other words, the difference between 99% and 80% of women is below the threshold of significance. Eliminating it altogether (until such time as it becomes significant) is an important and valuable practice in communication.

If I want an answer to three significant figures, I do not begin my reasoning by rounding to two sigfigs, then trying to add in the last sigfig later.
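The sigfig point can be made concrete with a toy calculation (the numbers and the `round_sig` helper below are made up for illustration; nothing here comes from the thread itself):

```python
# Toy illustration: rounding intermediates too early loses precision
# that cannot be recovered by rounding finely at the end.
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    return round(x, sig - int(floor(log10(abs(x)))) - 1)

a, b, c = 1.2345, 2.3456, 3.4567

# Carry full precision; round only the final answer to 3 sigfigs.
late = round_sig(a * b * c, 3)            # 10.0

# Round every intermediate to 2 sigfigs first, then report 3 sigfigs.
early = round_sig(round_sig(a, 2) * round_sig(b, 2) * round_sig(c, 2), 3)  # 9.66

print(late, early)  # the early-rounded chain has drifted noticeably
```

The analogy to provisos: dropping a qualifier at each step is like rounding each intermediate result, and the small losses compound across dependent statements.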

If one person thinks that an argument depends on an assumption that fails in 1 in 100 cases, and someone else thinks the assumption fails in 1 in 5 cases, and they don't even know that they disagree, and pointing out this disagreement is regarded as some kind of map-territory error, they will have trouble even noticing when the disagreement has become significant.

Failure to effectively exercise that practice will result in needless 'clarifications' distracting from the intended message, hampering dialogs with unnecessary cognitive burden resultant from additional nesting of "informational quanta." In other words; if you add too many provisos to a statement, an otherwise meaningful and useful one will become trivially useless.

This tends to happen to bad generalizations, yes. Once you consider all of the cases in which they are wrong, suddenly they seem to only be true in the trivial cases!

Good generalizations are still useful even after you have noted places where they are less likely to hold. Adding any number of true provisos will not make them trivial.

As for the cognitive load, why not state assumptions at the beginning of an essay where possible, rather than adding them to each individual statement? If the reader shares the assumptions, they'll just nod and move on. If the reader does NOT share the assumptions, then relieving them of the cognitive burden of being aware of disagreement is not a service.

In response to Rationality Drugs
Comment author: Iabalka 02 October 2011 08:25:05AM *  1 point [-]

What about improving rationality with neurofeedback? The theory is that if you can see some kind of representation of your own brain activity (EEG for example), you should be able to learn to modify it. It has been shown that people could learn to control pain by watching the activity of their pain centres (http://www.newscientist.com/article/mg18224451.400-controlling-pain-by-watching-your-brain.html). Neurofeedback is also used to treat ADHD, increase concentration, and "it has been shown that it can improve medical students' memory and make them feel calmer before exams."

In response to comment by Iabalka on Rationality Drugs
Comment author: GilPanama 06 October 2011 07:22:53AM 5 points [-]

I did quite a bit of EEG neurofeedback at the age of about 11 or 12. I may have learned to concentrate a little better, but I'm really not sure. The problem is that once I was off the machine, I stopped getting the feedback!

Consider the following interior monologue:

"Am I relaxing or focusing in the right way? I don't have the beeping to tell me, how do I know I am doing it right?"

In theory, EEG is a truly rational way to learn to relax, because one constantly gets information about how relaxed one is and can adjust one's behavior to maximize relaxation. In practice, I'm not sure if telling 12-year-old me that I was going to have access to electrical feedback from my own brain was the best way to relax me.

The EEG did convince me that physicalism was probably true, which distressed me because I had a lot of cached thoughts about how it is bad to be a soulless machine. My mother, who believed in souls at the time, reassured me that if I really was a machine that could feel and think, there'd be nothing wrong with that.

I wonder how my rationality would have developed if, at that point, she had instead decided to argue against the evidence?

Comment author: Logos01 04 October 2011 05:07:38PM 3 points [-]

And when dealing with matters of personal identity, not all explanations for the small worth of the set of exceptional people are as charitable as a supposedly small size of the set.

Certainly.

However, the simple truth is that communication becomes positively impossible if 'sweeping generalizations' at some level are not made. Is this a trade-off? Sure. But I for one do not find it exceedingly difficult to treat all broad-category generalizations as simulacra representing the whole body. Just like how there's probably not a single person in politics who agrees with the entirety of the DNC or the GOP's platforms, discussing those platforms is still relevant for a reason.

And political identity is arguably one of the most flame-susceptible categories available for discourse nowadays. So that's saying something significant here.

Comment author: GilPanama 06 October 2011 04:53:01AM *  6 points [-]

A statement like "Women want {thing}" leaves it unclear what the map is even supposed to be, barring clear context cues. This can lead to either fake disagreements or fake agreements.

Fake disagreements ("You said that Republicans are against gun control, but I know some who aren't!") are not too dangerous, I think. X makes the generalization, Y points out the exception, X says that it was a broad generalization, Y asks for more clarity in the future, X says Y was not being sufficiently charitable, and so on. Annoying to watch, but not likely to generate bad ideas.

Fake agreements can lead to deeper confusion. If X seriously believes that 99% of women have some property, and Y believes that only 80% of women have some property, then they may both agree with the generalization even if they have completely different ideas about what a charitable reading would be!

It costs next to nothing to say "With very few exceptions, women...", "A strong majority of women...." or "Most women...." The three statements mean different things, and establishing the meaning does not make communication next-to-impossible; it makes communication clearer. This isn't about charity, but clarity.

Comment author: nshepperd 25 September 2011 04:06:09PM 2 points [-]

I wonder if it's a coincidence that it's currently late at night and I find myself agreeing with both those readings. "Deontological ethics is morally wrong" sounds about accurate.

Comment author: GilPanama 26 September 2011 07:55:55AM *  1 point [-]

The fact that it sounds accurate is what makes it a funny category error, rather than a boring category error. "2 + 2 = 3 is morally wrong" is not funny. "Deontological ethics is morally wrong" is funny.

It calls to mind a scenario of a consequentialist saying: "True, Deontologist Dan rescued that family from a fire, which was definitely a good thing... but he did it on the basis of a morally wrong system of ethics."

That's how I reacted to it, anyway. It's been a day, I've had more sleep, and I STILL find the idea funny. Every time I seriously try to embrace consequentialist ethics, it's because I think that deontological ethics depend on self-deception.

And lying is bad.


EDIT: I am in no way implying that other consequentialists arrive at consequentialism by this reasoning. I am simply noting that the idea that consequentialist principles are better and more rational, so we should be rational consequentialists (regardless of the results), is very attractive to my own mental hardware, and also very funny.

Comment author: wedrifid 25 September 2011 05:31:21PM 0 points [-]

My late-night reading: "A deontological theory of ethics is not actually right. It is wrong. Morally wrong."

I am not sure what caused me to read it this way, but it cracked me up.

Cracked you up? Rather than just seeming like a straightforward implication of conflicting moral systems?

Comment author: GilPanama 26 September 2011 07:55:29AM 0 points [-]

Cracked you up? Rather than just seeming like a straightforward implication of conflicting moral systems?

I think it is not a straightforward implication at all. Maybe this rephrasing would make the joke clearer:

"A deontological theory of ethics is not actually right. It is morally wrong, in principle."

If that doesn't help:

"It is morally wrong to make decisions for deontological reasons."

What makes it funny is that moment wherein the reader (or at least, this reader) briefly agrees with it before the punchline hits.

Comment author: GilPanama 25 September 2011 12:03:23PM *  2 points [-]

"But what's actually right probably doesn't include a component of making oneself stupid with regard to the actual circumstances in order to prevent other parts of one's mind from hijacking the decision."


What you probably meant: "Rational minds should have a rational theory of ethics; this leads to better consequences."

My late-night reading: "A deontological theory of ethics is not actually right. It is wrong. Morally wrong."

I am not sure what caused me to read it this way, but it cracked me up.

Comment author: orthonormal 31 August 2011 02:00:59PM 7 points [-]

Well, that's the thing: some people do. Even obvious things can require some explanation.

Comment author: GilPanama 25 September 2011 11:03:04AM *  2 points [-]

This doesn't strike me as an inherently bad objection. Even the post offers the caveat that we're running on corrupt hardware. One can't say that consequentialist theories are WRONG on such grounds, but one can certainly object to the likely consequences of combining ambiguous expected values with brains that do not naturally multiply and are good at imagining fictional futures.

I think the argument can be cut down to this:

  1. In theory, we should act to create the best state of affairs.
  2. People are bad at doing that without predefined moral rules.
  3. Can we at least believe that we believe in those rules?

This is lousy truth-seeking, but may be excellent instrumental rationality if enough people are poor consequentialists and decent enough deontologists. It's not my argument of choice; step 3 seems suspiciously oily.

But then again, "That which can be destroyed by the truth, should be" has kind of a deontological ring to it...

Comment author: GilPanama 25 April 2011 05:54:33AM *  24 points [-]

How to step outside the rational box without going off the deep end. Essentially, techniques for maintaining a lifeline back to normality so you can explore the further reaches of the psyche in some degree of safety.

I developed some of these!

I had a manic episode as well, but it was induced by medication and led to hypersocial behavior. I quickly noticed that I was having bizarre and sudden convictions, and started adopting heuristics to deal with them. I thought I was normal, or even better than normal. Then I realized that such a thought was very abnormal for me, and compensated.

Mania, for me, was like thinking in ALL CAPS ABOUT THINGS I USUALLY IGNORED. It was suddenly giving credence to religion not because I ceased to be an atheist, but because WE ARE ALL CONNECTED REALLY! It was fuzzy thinking, but damned if it didn't make people like me more for a bit. It was looking people IN THE EYE, BECAUSE THAT IS WHAT TRUST AND SOCIAL COMMUNICATION IS ALL ABOUT, all the time, when I am normally shy of eye contact.

(If you find the CAPSLOCK intrusions in the above paragraph annoying, imagine THINKING THIS WAY and you begin to see why mania is a very tiring thing and NOT RECOMMENDED unless you REALLY KNOW WHAT YOU'RE DEALING WITH.)

Compensation strategies:

  • Another person in the mental ward, who had lived with mania for a longer time, taught me that breathing exercises can help. Stretch arms upward; inhale. Slowly lower them, exhaling. Repeat as needed.

  • I realized that because I was now trusting people (read: believing everything I heard), I was susceptible to getting extremely paranoid. This is not as contradictory as it sounds. After all, if you trust people who don't trust their doctors, you will trust in their paranoia. I therefore told myself, repeatedly, to trust my doctors. Over and over. This self-brainwashing was a good move in hindsight. Chaining myself to the mast of somebody else's sane clinical judgment protected me and ensured that I left the mental ward quickly.

  • I tended to think that I should try to "help people." Mania amplified that hero complex. I therefore repeated a mantra to myself, over and over, with manic fervor: People help themselves. People help themselves. You don't help people. People help themselves.

  • I was encouraged by a visiting parent to take notes of ideas, so I could pursue them later. Result: Lots of notes that I later sorted out into "reasonable" and "not worth pursuing." This was helpful. Nothing permanently insightful, but some decent ideas.

  • Another mantra: Even brilliant ideas are wrong 99% of the time. No matter how good your idea is, it is probably wrong. You are probably wrong. You are probably wrong. Under normal circumstances, this isn't a great mantra. During mania, it is essential.

If anybody ever questions my credentials as a rationalist, I think I can safely say that I tried very hard to be a traditional rationalist with an eye for biases even when I was technically not in my right mind.
