Comment author: drethelin 10 February 2013 06:30:19PM 0 points [-]

So from my point of view the moral argument is as I stated it earlier: We either should or should not allow immigrants because of moral laws. This argument is stupid because it is not based on consequences or information.

Your point seems to be that the consequentialist point of view should take into account the impact on immigrants, which is different from what I meant by the moral argument. I'm pretty sure I agree with yours. A country is made up of people. The costs/benefits to those people are a subset of the costs/benefits to the country, and should be factored in accordingly.

Comment author: non-expert 10 February 2013 06:52:54PM 0 points [-]

interesting, so you are dividing morality into the impact on immigrants and the idea that they should be allowed to join us as a moral right, with the former included in your analysis and the latter not.

putting aside positions, from a practical perspective it seems that drawing that line will remain difficult, because "impact on immigrants" likely informs the very moral arguments I think you're trying to avoid. In other words, putting that issue (the effect on immigrants) within the cost/benefit analysis requires some of the same subjective considerations that plague the moral argument (both in terms of the difficulty of resolving it with certainty and the goal of avoiding morality altogether).

Regardless, seems like the horse has been dead for hours (my fault!). Thanks for engaging with me.

Comment author: drethelin 10 February 2013 07:33:49AM 0 points [-]

What point are you trying to make? I'm really not sure. Completely ignoring the "Moral argument" seems obviously the correct thing to do, so I have to assume I'm misinterpreting what you mean by the moral argument.

Comment author: non-expert 10 February 2013 05:01:35PM *  1 point [-]

nope, i'm just asking why you think that the moral argument should be ignored, and why that position is obvious. we're talking about a group of humans and what laws and regulations will apply to their lives, likely radically changing them. these decisions will affect their relatives, who may or may not be in similar positions themselves. when legislating about persons, it seems there is always some relevance as to how the laws will affect those people's lives, even if broader considerations (value to us/cost to us as a country) are also relevant.

to be clear, i'm NOT saying you're wrong. I'm asking you why you think you're right, particularly since it's so obvious.

EDIT: i totally appreciate that i jumped in mid-conversation and asked a question which has now become a chain, and that might come off as odd to you, so sorry -- you asked about my point -- fair question. I'm not sure I really have one other than understanding your point of view. Perhaps silly, but i thought you made an interesting point and wanted to see how you thought through the issue before you made it. A "non-expert" can't tell anyone they're wrong; he can only try to learn why others think they are right :).

Comment author: non-expert 10 February 2013 03:37:36AM *  1 point [-]

if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

A different perspective i'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus translating 'right' into those terms is correct only if right and wrong (fairness, etc.) are defined within the bounds of what we know?

I'm wondering if you would agree that you're making an implicit philosophical argument in your quoted language -- namely that necessary knowledge (for right/wrong, or anything else) is within human comprehension, or to say it differently, by ignoring philosophical questions (e.g. who am i and what is the world, among others) you are effectively saying those questions and potential answers are irrelevant to the idea of right/wrong.

If you agree, that position, though most definitely reasonable, cannot be proven within the standards set by rational thought. Doesn't the presence of that uncertainty necessitate consideration of it as a possibility, and how do you weigh that uncertainty against the assumption that there is none?

To be clear, this is not a criticism. This is an observation that I think is reasonable, but interested to see how you would respond to it.

Comment author: drethelin 09 February 2013 08:17:50AM *  0 points [-]

I don't see why this treats the moral argument for joining us as any less relevant than the moral argument for not joining us. And yes, this does downplay or eliminate consideration of the moral question, which is what I was going for. Or to put it another way, the moral statement I'm trying to make is that the moral value of absolutist moral considerations is less than utilitarian concerns in regards to costs/benefits. I don't actually care about moral arguments for or against immigration that aren't consequentialist.

Comment author: non-expert 09 February 2013 08:15:50PM 0 points [-]

Look, there is no doubt an equivalency in your method in that "they should join us" is put on the backburner along with "we should penalize them." I'm simply highlighting this point.

Or to put it another way, the moral statement I'm trying to make is that the moral value of absolutist moral considerations is less than utilitarian concerns in regards to costs/benefits. I don't actually care about moral arguments for or against immigration that aren't consequentialist.

In limiting the "consequentialist" argument to the "home country's" benefits and costs, you've by default given credence to the idea that "they should be penalized," in that you're willing to avoid penalizing them only if they add value to your country. Another way of looking at it: those who want the immigrants to "join us" aren't benefited in any way by saying that the opposite moral argument was also ignored. You've softened your statement now by saying the "moral value ... is less," but you're actually going further than that -- you're saying that the utilitarian concerns about costs/benefits are SO GREAT relative to the moral issues that the moral issues should be ignored completely (or that's how your solution plays out). This is a bold statement, irrespective of its merits. How else would you interpret your statement?

Immigration would be much better if we approached the issue of "How much do immigrants cost us vs how much do we benefit from them" and made laws in light of this, instead of approaching it from the moral difference between "This is our home and we shouldn't let strangers in" or "Freedom means allowing anyone to join us".

Your point only works if you completely ignore the moral argument. Once it matters even a little, the luxuries offered by cost/benefit analysis are thrown out the window because you now have a subjective consideration to incorporate that makes choices difficult. Again, just highlighting the consequences of your argument, don't really have an opinion on your particular argument.

Part of the problem with politics is that we just say things without thinking about what they mean: our focus is on being right and presuming certainty rather than on understanding the sources and consequences of various political arguments and appreciating the uncertainty that is unavoidable in any governance regime (or so I would argue).

In response to comment by [deleted] on Politics Discussion Thread January 2013
Comment author: drethelin 09 January 2013 08:51:50PM 3 points [-]

Immigration would be much better if we approached the issue of "How much do immigrants cost us vs how much do we benefit from them" and made laws in light of this, instead of approaching it from the moral difference between "This is our home and we shouldn't let strangers in" or "Freedom means allowing anyone to join us".

Comment author: non-expert 09 February 2013 02:55:54AM 1 point [-]

I think you're implicitly making a moral statement (putting aside whether it's "correct"). Your focus on "costs to us and how much do we benefit" means we downplay or eliminate any consideration of the moral question. However, ignoring the moral question has the same effect as losing the moral argument to "this is our home and we shouldn't let strangers in" -- in both cases the moral argument for "joining us" is treated as irrelevant. I'm not making an argument, just an observation I think is relevant when considering the issue.

Comment author: non-expert 06 February 2013 08:25:04PM 2 points [-]

How has rationality, as a universal (or near-universal) theory of decision making, confronted its most painful weaknesses? What are rationality's weak points? The broader a theory is claimed to be, the more important it seems to really test the theory's weaknesses -- that is why I assume you bring up religion, but the same standard should apply to rationality. This is not a cute question from a religious person; it's an intellectual inquiry from a person hoping to learn. In honor of the granddaddy of cognitive biases, confirmation bias, doesn't rational choice theory need to be vetted?

HungryTurtle makes an attempt to get at this question, but he gets too far into the weeds -- this allowed LW to simply compare the "cons" of religion with the "cons" of rationality, which is a silly inquiry. I don't care how the weaknesses of rationality compare to the weaknesses of Judaism, because rational theory, if claimed to be universally applicable with no weaknesses, should be tested on the basis of that claim alone, and not on its weaknesses relative to some other theory.

NOTE: re-posting without the offending language in the hope i don't need to create a new name. looks like i lost on my instrumental rationality point; i got downvoted enough to be restricted. on the bright side, I am learning to admit i'm wrong (i was wrong to misjudge whether i'd offend LW, which prevented me from engaging with others on the substantive points i'm trying to learn more about).

Comment author: Qiaochu_Yuan 05 February 2013 11:21:45PM 1 point [-]

I'm happy to agree that emotion hacking is important to epistemic rationality.

Comment author: non-expert 05 February 2013 11:30:17PM 1 point [-]

ok, wasn't trying to play "gotcha," just answering your question. good chat, thanks for engaging with me.

Comment author: Qiaochu_Yuan 05 February 2013 11:07:51PM 0 points [-]

I agree that this is problematic but don't see what it has to do with what I've been saying.

Comment author: non-expert 05 February 2013 11:19:45PM 1 point [-]

you suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. to the extent that is wrong, you're omitting emotion hacking (a subjective factor) from your application of epistemic rationality.

Comment author: Qiaochu_Yuan 05 February 2013 09:29:49PM 0 points [-]

I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a more clear inquiry.

Can you clarify what you mean by this?

Comment author: non-expert 05 February 2013 10:46:08PM *  0 points [-]

sure. note that i don't offer this as conclusive or correct, but just something i'm thinking about. also, lets assume rational choice theory is universally applicable for decision making.

rational choice theory gives you an equation to use: fill that equation with the proper inputs, value them correctly, and you get an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily convertible to probabilities/numbers -- I worry this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume objectivity and truth regarding our answers, even though that belief often rests on the strength of the starting equation and not on our ability to accurately value and include the appropriate subjective factors. To the extent answering a question becomes difficult, we manufacture "certainty" by ignoring subjectivity or assuming it is less relevant than it is.

Simply put, the belief that we have a good and objective starting point biases us to believe we also can/will/actually do derive an objectively correct answer, which affects the accuracy with which we fill in the equation.
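A toy sketch of what I mean (all numbers below are invented purely for illustration, not drawn from any real analysis): the expected-value "equation" is the same for every option, but when one input is a subjective valuation, two defensible guesses for that input can flip the ranking of the options.

```python
# The "equation": sum of probability-weighted values.
def expected_value(outcomes):
    """outcomes is a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Option A: only "hard" inputs we can estimate with some confidence.
option_a = [(0.9, 10), (0.1, -20)]

# Option B: includes a subjective input (say, a moral cost) whose value
# we can only guess. Two defensible guesses for that cost follow.
option_b_low = [(0.5, 30), (0.5, -10)]   # subjective cost valued at -10
option_b_high = [(0.5, 30), (0.5, -20)]  # same cost valued at -20

print(expected_value(option_a))       # 7.0
print(expected_value(option_b_low))   # 10.0 -> B beats A
print(expected_value(option_b_high))  # 5.0  -> A beats B
```

The equation itself looks objective, but the verdict hinges entirely on a valuation the equation cannot supply -- which is the bias I'm pointing at: confidence in the tool spills over into confidence in the inputs.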

Comment author: Qiaochu_Yuan 05 February 2013 08:51:58PM *  2 points [-]

isn't this the ONLY kind of emotion-hacking out there? what emotions are expressed irrespective of external stimuli? seems like a small or insignificant subset.

Let me make some more precise definitions: by "emotional responses to my thoughts" I mean "what I feel when I think a given thought," e.g. I feel a mild negative emotion when I think about calling people. By "emotional responses to my behavior" I mean "what I feel when I perform a given action," e.g. I feel a mild negative emotion when I call people. By "emotional responses to external stimuli" I mean "what I feel when a given thing happens in the world around me," e.g. I feel a mild negative emotion when people call me. The distinction I'm trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.

I thought you were questioning the value of considering/responding to others' thoughts, because you are arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their "true" state of mind.

No, I'm just making the point that for the purposes of classifying different kinds of emotion-hacking I don't find it useful to have a category for other people's thoughts separate from other people's behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don't have direct access to other people's thoughts.

Interestingly, the "problem" you have

What problem?

Comment author: non-expert 05 February 2013 09:15:50PM 0 points [-]

Thanks for the clarification, now i understand.

Going back to the original comment i commented on:

emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Particularly with your third type of emotion hacking ("hacking your emotional responses to external stimuli"), it seems emotion hacking is vital for epistemic rationality -- i guess that relates to my original point: hacking emotions is at least as important for epistemic rationality as it is for instrumental rationality.

I raised the issue originally because I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a more clear inquiry.
