Comment author: Z_M_Davis 05 June 2009 06:19:38AM 2 points [-]

Re violence, do see "Bayesians vs. Barbarians."

Comment author: StanR 05 June 2009 09:16:47AM *  1 point [-]

I hope to post on that post shortly, after giving it some thought.

Gris, just as bias against violence may be the reason it's hardly ever considered, nonviolence may be not only a rational position but a strategically sensible one. Please consider looking at the literature concerning strategic nonviolence. The substantial literature at the Albert Einstein Institution is good for understanding nonviolent strategy and tactics against regimes, and the insights provided translate into courses of action in other conflicts as well.

Comment author: taw 04 June 2009 06:40:41AM 0 points [-]

A third possibility: Humans aren't in general capable of accurately reflecting on their preferences.

Three is pretty much like one. If utility functions work, there must be some way of figuring them out; I hoped someone had figured it out already.

If utility functions are a bad match for human preferences, that would seem to imply that humans simply tend not to have very consistent preferences. What major premise does this invalidate?

The utilitarian model being wrong doesn't necessarily mean that a different model, based on different assumptions, doesn't exist. I don't know which assumptions need to be broken.

Comment author: StanR 04 June 2009 07:31:55AM *  2 points [-]

The general premise in the mind sciences is that there are different selves, somehow coordinated through the cortical midline structures. Plenty of different terms have been used, and hypotheses suggested, but the two "selves" I use for shorthand come from Daniel Gilbert: Socrates and the dog. Socrates is the narrative self, the dog is the experiencing self. If you want something a bit more technical, I suggest the lectures about well-being (lecture 3) here, and to get really technical, this paper on cognitive science exploring the self.

Comment author: Annoyance 31 May 2009 11:32:50PM -5 points [-]

It may come as a shock, but in my case, being rational is not my highest priority. I haven't actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.

Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped "being rational" to the top of my to-do list.

Is it impossible to be an x-rationalist and still value people?

'People' do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.

Comment author: StanR 01 June 2009 02:08:47AM *  1 point [-]

Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped "being rational" to the top of my to-do list.

Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.

The Master of the Way treats people as straw dogs.

Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn't give obvious signals that you are an exploiter. Instead, you seem to be signaling "Vote me off the island!" to society, and this community. You may want to reconsider that position.

Comment author: jimrandomh 31 May 2009 11:23:39PM 0 points [-]

I feel I should jump in here, as you appear to be talking past each other. There is no confusion in the system 1/system 2 distinction; you're both using the same definition, but the bit about decoys and shields was actually the core of PJ's post, and of the difference between your positions. PJ holds that to change someone's mind you must focus on their S1 response, because if they engage S2, it will just rationalize and confabulate to defend whatever position their S1 holds. Now, I have no idea how one would go about altering the S1 response of someone who didn't want their response altered, but I do know that many people respond very badly to rational arguments that go against their intuition, increasing their own irrationality as much as necessary to avoid admitting their mistake.

Comment author: StanR 01 June 2009 01:39:44AM *  3 points [-]

I don't believe we are, because I know of no evidence of the following:

evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person's skill at system 2 reasoning just increases their resistance to ideas.

Perhaps one or both of us misunderstands the model. Here is a better description of the two.

Originally, I was making a case that attempting to reason was the wrong strategy. Given your interpretation, it looks like pjeby didn't understand I was suggesting that, and then suggested essentially the same thing.

My experience, across various believers (Christian, Jehovah's Witness, New Age woo-de-doo) is that system 2 is never engaged on the defensive, and the sort of rationalization we're talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how "disruptable" they were during explanations--not very.

How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, them admitting their mistake is not the goal. It's not even in the same ballpark. The goal is rational empowerment.

Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it's intuitive. We recognize a pattern likely to be a fallacy, and respond. We've built up that skill in our toolbox, but it's still intuitive, like a chess master who can walk by a board and say "white mates in three."

Comment author: pjeby 31 May 2009 04:15:31PM 0 points [-]

It's not just nontrivial, it's incredibly hard. Engaging "system 2" reasoning takes a lot of effort, lowering sensitivity to, and acute awareness of, social cues and signals.

Engaging system 2 is precisely what you don't want to do, since evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person's skill at system 2 reasoning just increases their resistance to ideas.

To actually change attitudes and beliefs requires the engagement of system 1. Otherwise, even if you convince someone that something is logical, they'll stick with their emotional belief and just avoid you so they don't have to deal with the cognitive dissonance.

(Note that this principle also applies to changing your own beliefs and attitudes - it's not your logical mind that needs convincing. See Eliezer's story about overcoming a fear of lurking serial killers for an example of mapping System 2 thinking to System 1 thinking to change an emotional-level belief.)

Comment author: StanR 31 May 2009 10:43:27PM *  1 point [-]

pjeby, sorry I wasn't clear, I should have given some context. I am referencing system 1 and 2 as simplified categories of thinking as used by cognitive science, particularly in behavioral economics. Here's Daniel Kahneman discussing them. I'm not sure what you're referring to with decoys and shields, which I'll just leave at that.

To add to my quoted statement, workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. While reasoning, we can pick up those cues and signals, but they'll break our concentration, so we tend to ignore them while reasoning carefully. The automatic, intuitive processing of the face interferes with the reasoning task; e.g. we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.

When I refer to belief, I'm not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses--the person knows the belief is irrational--and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn't wrong, but since the person usually knows they're irrational, they're outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.

People are automatically resistant to being asked to question their beliefs. Usually it's perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won't abandon outright. We shouldn't expect them to. It's unrealistic.

What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We'll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I'll think about putting a post together on this.

Comment author: AdeleneDawner 29 May 2009 08:15:41PM 0 points [-]

I was going to say "there are more workarounds than you think", but that's probably my selection bias talking again. That said, there are workarounds, in some situations. It's still not a trivial thing to learn, though.

Comment author: StanR 31 May 2009 09:27:51AM *  3 points [-]

It's not just nontrivial, it's incredibly hard. Engaging "system 2" reasoning takes a lot of effort, lowering sensitivity to, and acute awareness of, social cues and signals.

The mindset of "let's analyze arguments to find weaknesses," aka Annoyance's "rational paths," is a completely different ballgame than most people are willing to play. Rationalists may opt for that game, but they can't win, and may be reinforcing illogical behavior. Such a rationalist is focused on whether arguments about a particular topic are valid and sound, not on the other person's rational development. If the topic is a belief, attempting to reason it out with the person is counterproductive. Gaining no ground when engaging with people on a topic should be a red flag: "maybe I'm doing the wrong thing."

Does anyone care enough for me to make a post about workarounds? Maybe we can collaborate somehow, Adelene; I have a little experience in this area.

Comment author: stcredzero 29 May 2009 04:31:33PM 0 points [-]

I suspect that meetup.com's continued existence is fueled by single people trying to actually meet interesting others.

Comment author: StanR 31 May 2009 07:32:24AM *  6 points [-]

I was part of a meetup on "alternative energy" (to see if actual engineers went to the things--I didn't want to date a solar cell) when I got an all-group email from the group founder about an "event" concerning The Secret and a great opportunity to make money. Turned out it was a "green" multi-level marketing scam he was deep in, and they were combining it with The Secret. Being naive, at first I said I didn't think the event was appropriate, assuming it might lead to some discussion. He immediately slandered me to the group, but I managed to send out an email detailing his connections to the scam before I was banned from the group. I did get a thank you from one of the members, at least.

I looked through meetup and found many others connected to him. Their basic routine involves paying the meetup group startup cost, having a few semi-legit meetings, and then using their meetup group as a captive audience.

I admit, I was surprised. I know it's not big news, but the new social web has plenty of new social scammers, and they're running interference. It's hard to get a strong, clear message out when opportunists know how to capitalize on easy money: people wanting to feel and signal like they're doing something. I honestly don't think seasteading can even touch that audience, but then again, I'm not sure you'd want to.

Comment author: Eliezer_Yudkowsky 15 April 2009 02:34:55PM 2 points [-]
Comment author: StanR 17 April 2009 04:00:55AM -3 points [-]

Having had to explain to other sci-fi lovers in the past why using fiction as a counterargument is so silly, I googled to see if people had written about why it's silly. GUESS WHAT I FOUND?

The Logical Fallacy of Generalization from Fictional Evidence, by Eliezer Yudkowsky: http://www.overcomingbias.com/2007/10/fictional-evide.html

Yeah, my jaw dropped when I found that. I'm sure you won't respond to this, as the mass of LW moves on to the most current post, but really? Was this a self-aware joke? Eliezer 2009 is that much less rational than Eliezer 2007?

Comment author: PhilGoetz 15 April 2009 02:20:12PM *  0 points [-]

Depending on how polarized the sides are, the audience is either mostly, or completely going there to watch a fight and root for their team.

Like I said, entertainment.

If they just want to have pride in being right, and rallying their base, they shouldn't keep up a pretense of spreading rational atheism. They want both, but they can't have it.

Theory 2 proposes a way that rallying the base spreads atheism.

Comment author: StanR 16 April 2009 04:14:32AM *  1 point [-]

And I said rational atheism, not atheism.

Granted, I didn't express my thoughts on that clearly. I think there is a fundamental difference between attempting to get someone to agree with your opinions and helping them develop rationality--likely through both similar and different opinions. I think the latter is a better moral play, and it's genuine.

What is the higher-priority result for a rational atheist targeting a theist:

- a more rational theist
- a not-any-more-rational-than-before atheist
- an irrational agnostic

I think the biggest "win" of the results is the first. But groups claiming to favor rationality most still get stuck on opinions all the time (cryonics comes to mind). Groups tend to congregate around opinions, and forcing that opinion becomes the group agenda, even though they believe it's the right and rational opinion to have. It's hard, because a group that shares few opinions is hardly a group at all, but the sharing of opinions and exclusion of those with different opinions works against an agenda of rationality. And shouldn't that be the agenda of a rational atheist?

I think the "people who don't care" of your 1st theory are either 1) unimportant or 2) nonexistent, depending on the meaning of the phrase.

I think theory 2 makes a fatal mistake: it emphasizes the cart (opinion) rather than the horse (rational thinking). I'm willing to grant they're not so separate and cut-and-dried, but I wanted to illustrate the distinction I see.

Comment author: prase 15 April 2009 03:53:27PM 3 points [-]

It is far from clear that the "colour revolutions" resulted in more democracy in the respective countries. See e.g. http://en.wikipedia.org/wiki/Saakashvili#Criticism

To be an ally of the West is not the same as to be a democratic country. Similarly, the elections are not automatically rigged if communists win.

Comment author: StanR 16 April 2009 03:12:31AM 1 point [-]

So, according to Freedom House, countries with nonviolent revolutions since the late 1990s are improving. There's not a lot of data beforehand. You named the exception: Georgia's gotten a little worse since the overthrow of the "rigged" election there. Look at the data: http://www.freedomhouse.org/template.cfm?page=42&year=2008

I'm willing to admit I might have some Western bias, but I try to catch it. The general consensus does seem to be that the elections were rigged, but I don't know enough to say with much confidence either way.

In my original post, I was referring to the period of actual revolution, not everything since. I know it's not all sunshine and rainbows. Reality is gritty, nasty stuff. Nonviolent struggle strategy and tactics guarantee neither success nor democracy--but neither do violent methods.

If we're discussing strategies and tactics, most nonviolent movements do not plan much past overthrow. That's bad, but again, no worse than violent overthrow.

These are big and fuzzy concepts, for sure. When does a revolution actually end? If a less or equally undemocratic leader is elected, is that a failure of nonviolent struggle, a failure of planning, a failure of the people, or what? Are Freedom House's metrics valid or consistent? I don't have good answers.

If you were to wager on whether strategic nonviolent or strategic violent struggles in the modern day were more likely to lead toward a successful overthrow, how would you bet? What about leading toward more democratic overthrows (i.e. elections)?
