StanR

I hope to respond to that post shortly, after giving it some thought.

Gris, while bias against violence may be the reason it's hardly ever considered, nonviolence may be not only a rational position but a strategically sensible one. Please consider looking at the literature on strategic nonviolence. The substantial body of work at the Albert Einstein Institute is good for understanding nonviolent strategy and tactics against regimes, and the insights it provides translate into courses of action in other conflicts as well.

StanR

The general premise in the mind sciences is that there are different selves, somehow coordinated through the cortical midline structures. Plenty of different terms have been used, and hypotheses suggested, but the two "selves" I use for shorthand come from Daniel Gilbert: Socrates and the dog. Socrates is the narrative self, the dog is the experiencing self. If you want something a bit more technical, I suggest the lectures about well-being (lecture 3) here, and to get really technical, this paper on cognitive science exploring the self.

StanR

Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped "being rational" to the top of my to-do list.

Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.

The Master of the Way treats people as straw dogs.

Your strategy for dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn't give obvious signals that you are one. Instead, you seem to be signaling "Vote me off the island!" to society and to this community. You may want to reconsider that position.

StanR

I don't believe we are, because I know of no evidence of the following:

evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person's skill at system 2 reasoning just increases their resistance to ideas.

Perhaps one or both of us misunderstands the model. Here is a better description of the two.

Originally, I was making the case that attempting to reason with the person was the wrong strategy. Given your interpretation, it looks like pjeby didn't realize I was suggesting that, and then suggested essentially the same thing.

My experience across various believers (Christian, Jehovah's Witness, New Age woo-de-doo) is that system 2 is never engaged on the defensive, and the sort of rationalization we're talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how "disruptable" they were during those explanations--not very.

How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of the belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, getting them to admit their mistake is not the goal; it's not even in the same ballpark. The goal is rational empowerment.

Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it's intuitive. We recognize a pattern likely to be a fallacy, and respond. We've built up that skill in our toolbox, but it's still intuitive, like a chess master who can walk by a board and say "white mates in three."

StanR

pjeby, sorry I wasn't clear; I should have given some context. I am referring to system 1 and system 2 as simplified categories of thinking used in cognitive science, particularly in behavioral economics. Here's Daniel Kahneman discussing them. I'm not sure what you're referring to with decoys and shields, so I'll leave that at that.

To add to my quoted statement, workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. We can pick up those cues and signals while reasoning, but they break our concentration, so we tend to ignore them when reasoning carefully. The automatic, intuitive processing of faces interferes with the reasoning task; e.g., we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.

When I refer to belief, I'm not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses--the person knows the belief is irrational--and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn't wrong, but since the person usually knows they're irrational, they're outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.

People are automatically resistant to being asked to question their beliefs. Usually it's perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won't abandon outright. We shouldn't expect them to. It's unrealistic.

What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We'll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I'll think about putting a post together on this.

StanR

It's not just nontrivial, it's incredibly hard. Engaging "system 2" reasoning takes a lot of effort, lowering both sensitivity to and acute awareness of social cues and signals.

The mindset of "let's analyze arguments to find weaknesses," aka Annoynance's "rational paths," is a completely different game from the one most people are willing to play. Rationalists may opt for that game, but they can't win it, and they may be reinforcing illogical behavior. Such a rationalist is focused on whether arguments about a particular topic are valid and sound, not on the other person's rational development. If the topic is a belief, attempting to reason it out with the person is counterproductive. Gaining no ground when engaging with people on a topic should be a red flag: "maybe I'm doing the wrong thing."

Does anyone care enough for me to make a post about workarounds? Maybe we can collaborate somehow, Adelene; I have a little experience in this area.

StanR

I was part of a meetup on "alternative energy" (to see if actual engineers went to these things--I didn't want to date a solar cell) when I got an all-group email from the group founder about an "event" concerning The Secret and a great opportunity to make money. It turned out to be a "green" multi-level marketing scam he was deep in, and they were combining it with The Secret. Being naive, at first I said I didn't think the event was appropriate, assuming that might lead to some discussion. He immediately slandered me to the group, but I managed to send out an email detailing his connections to the scam before I was banned from the group. I did get a thank-you from one of the members, at least.

I looked through Meetup and found many other groups connected to him. Their basic routine involves paying the Meetup group startup cost, holding a few semi-legit meetings, and then using the group as a captive audience.

I admit, I was surprised. I know it's not big news, but the new social web has plenty of new social scammers, and they're running interference. It's hard to get a strong, clear message out when opportunists know how to capitalize on easy money: people wanting to feel and signal like they're doing something. I honestly don't think seasteading can even touch that audience, but then again, I'm not sure you'd want to.

StanR

And I said rational atheism, not atheism.

Granted, I didn't express my thoughts on that clearly. I think there is a fundamental difference between attempting to get someone to agree with your opinions and helping them develop rationality--likely through both similar and different opinions. I think the latter is a better moral play, and it's genuine.

What is the higher-priority result for a rational atheist targeting a theist:

  • a more rational theist
  • a not-any-more-rational-than-before atheist
  • an irrational agnostic

I think the biggest "win" among those results is the first. But even groups claiming to favor rationality most still get stuck on opinions all the time (cryonics comes to mind). Groups tend to congregate around opinions, and pushing that opinion becomes the group agenda, even though they believe it's the right and rational opinion to have. It's hard, because a group that shares few opinions is hardly a group at all, but the sharing of opinions and the exclusion of those with different opinions work against an agenda of rationality. And shouldn't that be the agenda of a rational atheist?

I think the "people who don't care" of your 1st theory are either 1) unimportant or 2) don't exist, depending on the meaning of the phrase.

I think theory 2 makes a fatal mistake: it emphasizes the cart (opinion) rather than the horse (rational thinking). I'm willing to grant they're not so separate and cut-and-dried, but I wanted to illustrate the distinction I see.

StanR

So, according to Freedom House, countries with nonviolent revolutions since the late 1990s are improving. There's not a lot of data beforehand. You named the exception: Georgia's gotten a little worse since the overthrow of the "rigged" election there. Look at the data: http://www.freedomhouse.org/template.cfm?page=42&year=2008

I'm willing to admit I might have some Western bias, but I try to catch it. The general consensus does seem to be that the elections were rigged, but I don't know enough to say with much confidence either way.

In my original post, I was referring to the period of actual revolution, not everything since. I know it's not all sunshine and rainbows; reality is gritty, nasty stuff. The strategy and tactics of nonviolent struggle guarantee neither success nor democracy--but neither do violent methods.

If we're discussing strategies and tactics, most nonviolent movements do not plan much past overthrow. That's bad, but again, no worse than violent overthrow.

These are big and fuzzy concepts, for sure. When does a revolution actually end? If a less or equally undemocratic leader is elected, is that a failure of nonviolent struggle, a failure of planning, a failure of the people, or what? Are Freedom House's metrics valid or consistent? I don't have good answers.

If you were to wager on whether strategic nonviolent or strategic violent struggles in the modern day were more likely to lead toward a successful overthrow, how would you bet? What about leading toward more democratic overthrows (i.e. elections)?
