Comments

Dabor

Being rational does not mean thinking without emotion. Being rational and being logical are different things. It may be more rational for you to go along with something you believe to be incorrect if it fits with your values.

For example, in an argument with a friend it may be logical to stick to what you know is true, but rationally you may just concede a point of the argument and help them, even if you think it is not in their best interest. It all depends on your values, and following your values is part of the definition of instrumental rationality.

If you're taking suggestions for corrections, I'd like to make one.

Your definition of "logical" here seems to be "ideal in a world of perfectly rational agents." I think that, for most people, 'logical' is already a loaded term that overlaps somewhat with 'rational', and allowing people to trade saying "That's rational but not right" for "That's logical but not right" may not be ideal (or logical, in your parlance).

Dabor

I've been going to the gym more or less consistently (barring illness and Hurricane Sandy) for the past year and a half, and it's never been an explicit goal. At the start, it was just "something that I should really start doing", and since then it's just been "something that I do." I've been gradually learning a third language with much the same mindset, over much the same time frame.

While I'm satisfied with my progress on either front, it feels anticlimactic, and I don't strongly view myself as a person who "sets goals and successfully pursues them."

I tend to be effective at precommitting to small but willpower-demanding tasks, and despite tending to add escape clauses for myself (so I can delay or abandon the task in case of an emergency), I've not yet needed to use one.

Overall, there seems to be a factor of risk/reward with regard to one's self-image. If you decide "I'm going to eat healthier and exercise so I can lose some weight" or "I'm going to lose 10 pounds by the end of the month" and find yourself losing 9 pounds either way, it seems that the only remaining way to judge the two methods is by how well they'd contribute to a success spiral.

In other words, every goal-setting method has two significant elements: how effectively it'll get you to accomplish the immediate task at hand, and how well it'll contribute to your overall ability to set quality goals. To some extent this will just depend on your personality, and sometimes you'll need to risk some long-term confidence because you really, really need to get something done. In a good number of cases, though, setting a goal that keeps you comfortable setting useful goals in the future can be as important as whatever the goal is meant to accomplish immediately.

I've not given this conclusion too much thought (it came to me as I was writing), so feel free to point out if it's obviously wrong (or just trivially true).

Dabor

Given that you want to improve your rationality to begin with, though, is believing that your moral worth depends on it really beneficial?

I'm not sure whether you're asking about the moral worth I assign to myself or to others, so I'll answer both.

If you're referring to the moral worth I assign to myself, I'm assuming the problem would be that, as I learn about biases, I would consider myself less of an agent, and so wouldn't be motivated to discover my mistakes. You'll have to take my word for it that I pat myself on the back whenever I discover an error in my thinking and mark it down, but other than that, I don't have an issue with my self-image being (significantly, long-term) tied to how I estimate my efficacy at rationality, one way or another. I just enjoy the process.

If you're referring to how I value others, then rationality seems inextricably tied to how I think of them. As I learn how people arrive at certain views or actions, I consider them more or less justified in doing so, and more or less "valuable" than others, if I may speak so bluntly of my fellow man. If I don't think there's a good reason to vandalize someone's property, and I do think there's a good reason to offer food to a homeless man, then, given that isolated knowledge and a choice from Omega of whom to save (assuming I can't save both), I'll save the person who commits more justified actions. Learning about difficult-to-lose biases that can lead one to do "bad things", or about misguided notions that can cause people to do right for the wrong reason, inevitably changes how I view others (however incrementally), even if I don't grant them agency and see them as "merely" complex machines.

Do you actually value us and temporarily convince yourself otherwise, or is it the other way around?

Considering that I know saying I value others is the ideal answer, and that if I didn't believe it I'd prefer to, it would be difficult for me to honestly say that I don't value others. I'm not an empathetic person and don't tend to find myself worrying about the future of humanity, but I try to think as if I did for the purpose of moral questions.

Seeing as I value valuing you, and am, from the outside, largely indistinguishable from somebody who values you, I think I can safely say that I do value others.

But, I didn't quite have the confidence to answer that flatly.

Dabor

In that sense, I don't know if modelling different people differently is, for me, a morally right or a wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.

From the main post.

Dabor

I was quoting. It would be more accurate to ask "Would this be done exclusively by idiots?", what with reversed stupidity. Alternatively, if the answer to the default version is yes, that just suggests you need to give it further consideration. Either way, it's pretty tautological ("Would only smart people do this? If not, am I doing it for a smart reason?"), but having an extra layer of flags for thinking doesn't hurt.

Dabor

I've gone through a change much like this over the past couple of years, although not through explicit effort. I would tend to get easily annoyed by coming across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then proceed to be disappointed in myself for letting something like that hang over my thoughts for a few hours.

Switching to a model in which I'm responsible for my own reactions to other people does wonders for self-control and saves some needless frustration.

I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn't try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don't find that to be a sustainable way to act with strangers: I can't take the time to model why somebody is flinging a poorly written insult over a meaningless topic I happened to skim, and I'd gain nothing (and very probably be wrong) in assuming they have a good reason.

As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one's benefit. Once you make a distinction about what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. It reminds me a bit of "Would an idiot do that? And if they would, I do not do that thing."

I can still rather easily choose to view people as agents and assign them moral value in any context where I have to make a decision, so I don't think assigning others significantly reduced moral value is to my detriment: it just removes the pressure to find a justification for their actions.

This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or inconveniences in my writing style.