
In response to comment by tel on Remaining human
Comment author: Broggly 07 June 2011 03:34:24AM 0 points

Tosca sounds like it has some strange theology. Surely most people who believe in Hell also believe in Absolution?

In response to comment by Broggly on Remaining human
Comment author: tel 15 June 2011 02:17:02AM 0 points

Murder, suicide, and Catholicism don't mix. It's supposed to be a challenging opera for a culture that truly believes in the religious moral compass. You empathize with Tosca and her decision to damn herself. The guy she kills is rather evil.

In response to comment by tel on Remaining human
Comment author: jhuffman 31 May 2011 08:28:24PM 3 points

Well, if rationality were traded on an exchange, irrational expectations for it probably did peak during the Enlightenment, but I don't know what that really means to us now. The value reason has brought us is still accumulating, and with it, reason's power to produce value is also accumulating.

In response to comment by jhuffman on Remaining human
Comment author: tel 31 May 2011 08:42:18PM 1 point

I'm not sure I follow your first notion, but I don't doubt that rationality is still marginally profitable. I suppose you could couch my concerns as whether there is a critical point in rationality's profit: at some point, does becoming more rational cause more loss in our value system than gain? If so, do we toss out rationality or do we toss out our values?

And if it's the latter, how do you continue to interact with those who didn't follow in your footsteps? Create a (self-defeating) religion?

In response to comment by tel on Remaining human
Comment author: [deleted] 31 May 2011 07:15:30PM 4 points

I believe that in general, being able to make decisions that lead to the best consequences requires being able to imagine consequences of decisions, which requires being able to imagine counterfactuals well. If you want to be able to evaluate whether a claim is true or false, you have to be able to imagine a world in which the claim is true, and another in which the claim is false.

As a result, although it's irrational to believe in eternal damnation, a rational mind should certainly be able to empathize with someone afraid of eternal damnation. If a religious (or otherwise irrational) work of art is good, it would be irrational not to appreciate that. I think the reason you may see the opposite effect is that some atheists are afraid of admitting they felt moved by a religious work of art, because it feels like an enemy argument.

In response to comment by [deleted] on Remaining human
Comment author: tel 31 May 2011 07:51:33PM 2 points

That's close, but the object of concern isn't religious artwork but instead states of mind that are highly irrational but still compelling. Many (most?) people do a great deal of reasoning with their emotions, but rationality (justifiably) demonizes it.

Can you truly say you can communicate well with someone who is weighing suicide and eternal damnation against the guilt of killing the man responsible for the death of your significant other? It's probably a situation that a rationalist would avoid, and definitely a state of mind far different from one a rationalist would take.

So how do you communicate with a person who empathizes with it and relates those conundrums to personal tragedies? I feel rather incapable of communicating with a deeply religious person because we simply appreciate (rightfully or wrongfully) completely different aspects of the things we talk about. Even when we agree on something actionable, our conceptions of that action are non-overlapping. (As a disclaimer, I lost contact with a significant other in this way. It's painful, and it motivates some of the thoughts here, but I don't think it's influencing my judgement such that it differs much from my beliefs before her.)

In particular, the entire situation is not so different from Eliezer's Three Worlds Collide narrative, if you want to tie it to LW canon material. Value systems can in part define admissible methods of cognition, and that can manifest itself as an inability to communicate.

What were the solutions suggested? Annihilation, utility function smoothing, rebellion and excommunication?

In response to Remaining human
Comment author: ata 31 May 2011 05:09:14PM 5 points

Can you taboo "rational[ity]" and explain exactly what useful skills or mindsets you worry would be associated with decreased empathy or humaneness?

In response to comment by ata on Remaining human
Comment author: tel 31 May 2011 05:58:18PM 2 points

A loss of empathy with "regular people". My friend, for instance, loves the opera Tosca, where the ultimate plight and trial comes down to the lead soprano, Tosca, committing suicide despite certain damnation.

The rational mind (of the temperature often suggested here) might have a difficult time mirroring that sort of conundrum; however, the opera has been used to talk about and explore the topics of depression and sacrifice for just over a century now.

So if you take part of your job to be an educator of those still under the compulsion of strange mythology, you will probably have a hard time communicating with them if you sever all connection to that mythology.

In response to Remaining human
Comment author: Alicorn 31 May 2011 05:05:54PM 16 points

Who wants to be human? Humans suck. Let's be something else.

In response to comment by Alicorn on Remaining human
Comment author: tel 31 May 2011 05:53:47PM 1 point

I agree! That's at least part of why my concern is pedagogical. Unless your plan is more along the lines of running for the stars and killing everyone who didn't come along.

Remaining human

0 tel 31 May 2011 04:42PM

If our morality is complex and directly tied to what's human—if we're seeking to avoid building paperclip maximizers—how do you judge and quantify the danger in training yourself to become more rational, if doing so should drift away from being human?


My friend is a skeptical theist. She, for instance, scoffs mightily at Camping's little dilemma/psychosis, but then argues from a position of comfort that the Rapture is a silly thing to predict because it's clearly stated that no one will know the day. And then she gives me a confused look because the psychological dissonance is clear.

On one hand, my friend is in a prime position to take forward steps toward self-examination and holding rational belief systems. On the other hand, she's an opera singer whose passion and profession require her to be able to empathize with and explore highly irrational human experiences. Since rationality is the art of winning, nobody can deny that the option that lets you have your cake and eat it too is best, but how do you navigate such narrows?


In another example, a recent comment thread suggested the dangers of embracing human tendencies: catharsis might lead to promoting further emotional intensity. At the same time, catharsis is a well-appreciated human communication strategy with roots in the Greek stage. If rational action pulls you away from humanity, away from our complex morality, then how do we judge it worth doing?

The most immediate resolution to this conundrum appears to me to be that human morality has no consistency constraint: we can want to be powerful and able to win while also wanting to retain our human tendencies, which directly impinge on that goal. Is there a theory of metamorality which allows you to infer how such tradeoffs should be managed? Or is human morality, as a program, flawed with inconsistencies that lead to inescapable cognitive dissonance and dehumanization? If you interpret morality as a self-supporting strange loop, is it possible to have unresolvable, drifting interpretations based on how you focus your attention?


Dual to the problem of resolving a way forward is the problem of the interpreter. If there is a goal to at least marginally increase the rationality of humanity, but in order to discover the means to do so you have to become less capable of empathizing with and communicating with humanity, who acts as an interpreter between the two divergent mindsets?

Comment author: Zetetic 13 May 2011 11:38:34PM 0 points

highly complex unobservable mechanisms, large number of potential causes and covariates, sensible multiple groupings of observations, etc

Hmm, I might be totally off base here, but wouldn't that sort of thing be useful for reasoning about highly powerful optimization processes, ones driven to maximize their expected utility by figuring out which actions would decrease the entropy of a desirable portion of state space, working from massive amounts of input data? Maybe I should check it out either way.

Comment author: tel 31 May 2011 04:54:10AM 2 points

I'm sorry, but as I'm reading it, that sounds rather vague. Gelman's work stems largely from the fact that there is no central theory of political action. Group behavior is some kind of sum of individual behaviors, but with only aggregate measurements you cannot discern the individual causes. This leads to a tendency to never see zero effect sizes, for instance.
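To illustrate that last point with a toy simulation (my own sketch, not Gelman's; all the numbers are invented): if you screen enough unrelated covariates against a noisy outcome, something almost always looks like a nonzero effect.

    # Toy illustration: with many candidate covariates and no theory
    # constraining the model, pure noise still yields "significant" effects.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_covariates, n_trials = 100, 20, 1000

    false_positives = 0
    for _ in range(n_trials):
        y = rng.normal(size=n_people)                   # outcome: pure noise
        X = rng.normal(size=(n_people, n_covariates))   # unrelated covariates
        # correlation of each covariate with the outcome
        r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_covariates)])
        # two-sided t-statistic for each correlation: |r| sqrt(n-2) / sqrt(1-r^2)
        t = np.abs(r) * np.sqrt(n_people - 2) / np.sqrt(1 - r**2)
        if (t > 1.98).any():                            # ~5% threshold per covariate
            false_positives += 1

    print(f"at least one 'effect' found in {false_positives / n_trials:.0%} of trials")
    # With 20 independent tests at the 5% level, expect roughly 1 - 0.95**20 ≈ 64%.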

Comment author: tel 24 April 2011 05:37:49PM 1 point

I think this is an important direction to push discourse on Rationality toward. I wanted to write a spiritually similar post myself.

The theory is that we know our minds are fundamentally local optimizers. Within the hypothesis space we are capable of considering, we are extremely good exploitative maximizers, but, as always, it's difficult to know how much to err on the side of exploratory optimization.

I think you can couch creativity and revolution in terms like that, and if our final goal is to find something to optimize and then do it, it's important to note that randomized techniques might be a necessary component.
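As a concrete toy model of that explore/exploit dial (my own sketch, not anything from the post; the arm payoffs are invented), an epsilon-greedy bandit reduces the tradeoff to a single parameter:

    # Epsilon-greedy bandit: `epsilon` is the dial between exploitation
    # (pull the arm with the best estimate) and exploration (pull at random).
    import numpy as np

    rng = np.random.default_rng(1)
    true_means = np.array([0.2, 0.5, 0.8])   # unknown to the agent

    def run(epsilon, steps=5000):
        counts = np.zeros(3)
        estimates = np.zeros(3)
        total = 0.0
        for _ in range(steps):
            if rng.random() < epsilon:
                arm = int(rng.integers(3))       # explore: random arm
            else:
                arm = int(np.argmax(estimates))  # exploit: best estimate so far
            reward = rng.normal(true_means[arm])
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
            total += reward
        return total / steps

    for eps in (0.0, 0.1, 0.5):
        print(f"epsilon={eps}: average reward {run(eps):.3f}")

A purely exploitative agent (epsilon=0) can lock onto the first mediocre arm it tries and never discover the better ones, while too much exploration wastes pulls on arms already known to be bad; the interesting question is where between those extremes to sit.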

In response to comment by tel on Simpson's Paradox
Comment author: Davidmanheim 20 January 2011 04:04:46AM 0 points

Clearly one could split a data set using basically any possible variable, but most are obviously wrong. (That is to say, they lack explanatory power, and are actually irrelevant.) To attempt to simplify, then, if you understand a system, or have a good hypothesis, it is frequently easier to pick variables that should be important, and gather further data to confirm.

Comment author: tel 24 January 2011 03:05:18AM 2 points

This is made explicit in removing connections from the graph. The more "obviously" "wrong" connections you sever, the more powerful the graph becomes. This is potentially harmful, though: much as when you assign 0 probability weight to some outcome, once you sever a connection you lose the machinery to reason about it. If your "obvious" belief proves incorrect, you've backed yourself into a room with no escape. Therefore, test your assumptions.

This is actually a huge component of Pearl's methods since his belief is that the very mechanism of adding causal reasoning to probability is to include "counterfactual" statements that encode causation into these graphs. Without counterfactuals, you're sunk. With them, you have a whole new set of concerns but are also made more powerful.

It's also really, really important to dispute that "one could split a data set using basically any possible variable". While this is true in principle, Pearl made/confirmed some great discoveries with his causal networks, which helped to show that certain sets of conditioning variables will, when selected together, actively mislead you. Moreover, without using counterfactual information encoded in a causal graph, you cannot discover which variables these are.
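The textbook case of this is conditioning on a collider. Here's a minimal simulation (my own sketch, not Pearl's code): two causes that are independent by construction become correlated as soon as you select on their common effect.

    # Collider bias: X -> Z <- Y. X and Y are independent, but conditioning
    # on (selecting by) the common effect Z induces a spurious association.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    x = rng.normal(size=n)
    y = rng.normal(size=n)                      # independent of x by construction
    z = x + y + rng.normal(scale=0.5, size=n)   # common effect (collider)

    print(f"corr(x, y), all data:    {np.corrcoef(x, y)[0, 1]:+.3f}")   # ~ 0
    mask = z > 1.0                              # condition on the collider
    print(f"corr(x, y), given z > 1: {np.corrcoef(x[mask], y[mask])[0, 1]:+.3f}")   # clearly negative

Without the causal graph, Z looks like just another variable one might "control for"; the graph is what tells you that conditioning on it opens a spurious path between X and Y.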

Finally, I'd just like to suggest that picking a good hypothesis and coming to understand a system are undoubtedly the hardest parts of knowledge, involving creativity, risk, and some of the most developed probabilistic arguments. Actually making comparisons between competing hypotheses, such that you can end up with a good model and know what "should be important", is the tough part, fraught with the possibility of failure.

Comment author: Tiiba 17 January 2011 06:54:52PM 0 points

You guys do what works for you, and I'll do what works for me. Maybe I just don't have the patience. Or maybe you don't have something required to understand lossily compressed info. Or both. I just know that books take all day long and help as much as short online tutorials. And the tutorials are often free.

Comment author: tel 18 January 2011 04:11:53PM 0 points

If lecture notes contain as much relevant information as a book, then you should be able to, given a set of notes, write a terse but comprehensible textbook. If you're genuinely able to get that much out of notes, then yes, that definitely works for you.

The concern is instead whether the notes convey only a sparse, unconvincing, and context-free version of the textbook (which is my general impression of most lecture notes I've seen).

Both depend heavily on the quality of notes, textbook, subject, and the learning style you use, but I think it's a lot of people's experience that lecture notes alone convey only a cursory understanding of a topic. Practically enough sometimes, test-taking enough surely, but never too many steps toward mastery.
