
Right. I think this is one of the key issues. When things are 'natural' or 'random' (in where, when, and how often they happen), or are otherwise uncontrollable, humans are much keener to accept them. When agency comes into play, it changes the perspective completely: "how could we have changed culture/society/national policies/our surveillance system/educational system/messaging/nudges/pick your favorite human-controllable variable" to have prevented this, or to prevent it in the future? It's the very idea that we could influence it, and/or that it's perpetuated by 'one of us', that makes it so salient and disturbing. From a consequentialist perspective, this is definitely not rational, and ideally it shouldn't affect our allocation of resources to combat threats.

Is there a particular named bias that covers "caring more about something, however irrelevant or harmless, just because a perceived intelligent agent was responsible for it"?

One of the reasons this post is of interest is that it likely represents the feelings of some/many would-be rationalists and the struggles they have. The reasons this person has for continuing their current mode of living cut across many different lines. How many people choose not to come out of the closet, don't admit to being childfree, or refuse to be the sexual libertines they wish they could be for fear of being ostracized (and losing their social and economic support networks)? Thought experiment:

In a theoretical future society where the following conditions are true:

  • New people are "grown" or simply do not know their parents; a highly advanced AI raises everyone. This means there are no familial attachments. All attachments are to others you voluntarily enter into relationships with (friends, sexual partners, mentors, whatever). The modern analogue would be being "raised by the state" (and not necessarily in underfunded orphanages).

  • The link between work and survival has been completely severed. Robots do all the work, and all the basics are provided. You can work if you want to, but it's not required, and you're in no greater danger of starving, being homeless, being involved in a violent situation, etc. if you don't. This means the economic reasons for maintaining links to others are also severed. Modern equivalent could be generous welfare states with universal job systems.

  • Finding people you feel you'd want to associate with has become trivial. A system exists that can very quickly find others who share your interests, and due to sophisticated "intent"-reading technology (meaning it's impossible to lie to or deceive the system), there's no question that those you are connected with are honest about their intentions for wanting to associate with you. No modern equivalent.

To sum up the above, it's a society of free associations, no economic dependence, and total transparency with regards to interpersonal connections.

In this society, how many people would be afraid to be rationalists (or irreligious, childfree, libertines, take your pick)? What does the data say about societies which tend more in these directions than the US? Here's one interesting datapoint: http://t.co/E2WEIxR

Bottom line for this comment: I would speculate that the ability to be an open rationalist is heavily influenced by which society you live in, though obviously some real data would be helpful here. Using educational attainment and (low) religiosity as proxies for open rationalism, are countries that score high on those measures more accepting of open rationality? Top fits would be places like the Czech Republic, Finland, Sweden, and maybe Germany. It would be interesting to know.

My question is:

Why is guilt so often such a poor deterrent? There are people who know they will feel guilty before the fact, do in fact feel guilty after the fact, and perhaps even continue to live with that guilt every day, yet still continue the behavior. Guilt seems to have evolved exactly for the purpose of preventing people from acting in ways they consider immoral or unethical, so why does it so often seem so bad at its job?

I don't believe I've conflated anything. It's posed as a question because I don't know the answer; I'm giving my view and some speculation based on a nagging feeling/set of thoughts. I'm looking for the views and experiences of others who may have observed/felt something similar.


Though I agree with you strongly, I think we should throw the easy objection out there: high-quality, thorough scholarship takes a lot of time. Even for people dedicated to self-improvement, knowledge, and truth-seeking (of whom I speculate this community has many), getting to the "state of the art" in some subjects, i.e., the minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well-refuted ideas", is a very expensive process. So much so that some might argue that communities like this one wouldn't even exist (or would be even smaller than they are) if we all attempted to reach that minimum level across the voluminous, ever-growing list of subjects one could know about.

This is a roundabout way of saying that our knowledge-consumption abilities are far too slow. We can and should attempt to be widely, broadly read knowledge-generalists and stand on the shoulders of giants; climbing even one, though, can take a dauntingly long time.

We need Matrix-style insta-learning. Badly.

The bigger issue to me is the value system that makes this phenomenon exist in the first place. It essentially requires people to care more about signaling than about seeking truth. Of course this makes sense for many (perhaps most) people, since signaling can get you all sorts of other things you want, whereas finding the truth could happen in a near-vacuum (you could discover some fundamental truth and then die immediately, forget about it, or tell it to people and have no one believe you).

It bothers me that extremely narrow self-interest (as indicated by "fun to argue") is so much more important to so many than truth-seeking. Would it be so /wrong/ to seek truth, and THEN signal once you think you've found it (even if you're actually incorrect), rather than just taking up a contrary position for its own inherent "argumentative pleasure" value?

It seems intellectually lazy. Perhaps that's part of its appeal.

Good post. One other thing that should be said has to do with the /why/. Why do we design many games like this? There are some obvious reasons: it's easier, it's fun, it plays on our natural reward mechanisms, etc. A perhaps less obvious one: it reflects the world as many /wish it could be/. Straightforward; full of definite, predefined goals; having well-known, well-understood challenges; having predictable rewards that are trivial to compare to others'; having a very linear path for "progression" (via leveling up, attribute increases, etc.) A world with a WHOLE lot fewer variables.

Very good article. One thing I'd like to see covered is conditions that are "treatable" with good lifestyle choices, but whose burden is so onerous that no one would consider them acceptable. Let's say you have a genetic condition which causes you to gain much more weight (5x, 10x - the number is up to the reader) than a comparable non-affected person. So much so that the only way you can prevent yourself from becoming obese is to strenuously exercise 8 hours a day. If a person chooses not to do this, are they really making a "bad" choice? Is it still their fault? In this scenario, 1/3 of your day/life has become about treating this condition. I doubt many people would honestly choose to do the "virtuous" thing in this situation.

Second thing I'd like covered: things that were inflicted on you without your consent. How much blame can you take for, let's say, your poor job prospects if your parents beat you severely every day (giving you slight brain damage of some kind, but not enough for it to be casually noticeable), fed you dog food and dirt sandwiches until you were 18, or forced you to live in an area where bullets flew into your room while you slept, waking you in terror? There's plenty of evidence for the potentially devastating and permanent effects of trauma, poor childhood nutrition, and stress. Sure, some people manage to live like that and come out of it OK, but can everyone? Is it still right to hold someone so treated /morally/ responsible for doing poorly in their life?

If they raped you, starved you/fed you paint chips, beat you to the point of brain injury, or tortured you? How about being born in a place where the pollution is so bad that you're very likely to get sick or die from it? Places that are completely ravaged by drought or famine? Places where genocide is fairly regular? Where your parents are so destitute that they are forced to feed you the absolute worst food (or even non-"food"), so that your brain/body never develops properly?

Of course, for people/places where rape/forced childbirth is prevalent, or where knowledge of how pregnancy occurs is still non-existent, it's understandable. For places where the former isn't prevalent and that knowledge does exist, there really should be no statute of limitations on blame.

The quote is good, but should be understood to apply only in certain contexts (i.e., to people who weren't born into horrific conditions and who live(d) in a place with something resembling equality of opportunity). Not understanding this perpetuates the "everything that happens to you is your own fault" idea that appears in some popular strains of political thought today, when it clearly cannot be universally applied.

I came up with a quote for a closely related issue:

"Don't let the fact that idiots agree with you be the sole thing that makes you change your mind, else all you'll have gained is a different set of idiots who agree with you."

Naive people (particularly contrarians) put into a situation where they aren't sure which ideas are truly "in", "out", or "popular" may become highly confused and find themselves switching sides frequently. After joining a "side" and then being agreed with by people whose arguments in support of something good were poor, they find themselves reasoning, "Wow. So many idiots support this! There's no way this can be good," only to find, after switching sides again, that the same thing keeps happening. Why? Because there are likely complete fools who support every cause you might consider good.

Bottom line: consider arguments on their merits, and avoid automatically thinking that they're bad (or good) simply because of who believes them or the (bad) arguments made on behalf of the idea. That's difficult, but if you don't, you wind up with situations similar to Eliezer's.
