
Interesting, so you're dividing morality into the impact on immigrants and the idea that they should be allowed to join us as a moral right, with the former included in your analysis and the latter not.

Putting aside positions, from a practical perspective it seems that drawing that line will remain difficult, because "impact on immigrants" likely informs the very moral arguments I think you're trying to avoid. In other words, putting that issue (the effect on immigrants) within the cost/benefit analysis requires some of the same subjective considerations that plague the moral argument (both the difficulty of resolving it with certainty and the goal of avoiding morality).

Regardless, it seems the horse has been dead for hours (my fault!). Thanks for engaging with me.

Nope, I'm just asking why you think the moral argument should be ignored, and why that position is obvious. We're talking about a group of humans and what laws and regulations will apply to their lives, likely changing them radically. These decisions will affect their relatives, who may or may not be in similar positions themselves. When legislating about persons, it seems there is always some relevance to how the laws will affect those people's lives, even if broader considerations (value to us/cost to us as a country) are also relevant.

To be clear, I'm NOT saying you're wrong. I'm asking why you think you're right, particularly since you find it so obvious.

EDIT: I totally appreciate that I jumped in mid-conversation and asked a question that has now become a chain, which might come off as odd, so sorry. You asked about my point -- fair question. I'm not sure I really have one other than understanding your point of view. Perhaps silly, but I thought you made an interesting point and wanted to see how you thought through the issue before you made it. A "non-expert" can't tell anyone they're wrong; he can only try to learn why others think they are right :).

if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

A different perspective I'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus that translating 'right' into those terms is correct only if right and wrong (fairness, etc.) are defined within the bounds of what we know?

I'm wondering if you would agree that you're making an implicit philosophical argument in your quoted language -- namely, that the knowledge necessary (for right/wrong, or anything else) is within human comprehension. To say it differently, by ignoring philosophical questions (e.g., who am I and what is the world, among others), you are effectively saying those questions and their potential answers are irrelevant to the idea of right/wrong.

If you agree, then that position, though most definitely reasonable, cannot be proven within the standards set by rational thought. Doesn't the presence of that uncertainty necessitate considering it as a possibility, and how do you weigh that uncertainty against the assumption that there is none?

To be clear, this is not a criticism. It is an observation that I think is reasonable, but I'm interested to see how you would respond to it.

Look, there is no doubt an equivalency in your method, in that "they should join us" is put on the back burner along with "we should penalize them." I'm simply highlighting this point.

Or to put it another way, the moral statement I'm trying to make is that the moral value of absolutist moral considerations is less than utilitarian concerns in regards to costs/benefits. I don't actually care about moral arguments for or against immigration that aren't consequentialist.

In limiting the "consequentialist" argument to the "home country's" benefits and costs, you've by default given credence to the idea that "they should be penalized," in that you're willing to avoid penalizing them only if they add value to your country. Another way of looking at it: those who want the immigrants to "join us" aren't benefited in any way by the fact that the opposite moral argument was also ignored. You've softened your statement now by saying "moral value...is less," but you're actually going further than that -- you're saying the utilitarian concerns on costs/benefits are SO GREAT relative to the moral issues that the moral issues should be ignored completely (or that's how your solution plays out). This is a bold statement, irrespective of its merits. How else would you interpret your statement?:

Immigration would be much better if we approached the issue of "How much do immigrants cost us vs how much do we benefit from them" and made laws in light of this, instead of approaching it from the moral difference between "This is our home and we shouldn't let strangers in" or "Freedom means allowing anyone to join us".

Your point only works if you completely ignore the moral argument. Once it matters even a little, the luxuries offered by cost/benefit analysis are thrown out the window, because you now have a subjective consideration to incorporate that makes choices difficult. Again, I'm just highlighting the consequences of your argument; I don't really have an opinion on the argument itself.

Part of the problem with politics is that we just say things without thinking about what they mean. Our focus is more on being right and presuming certainty than on understanding the sources and consequences of various political arguments and appreciating the uncertainty that is unavoidable with any governance regime (or so I would argue).

I think you're implicitly making a moral statement (putting aside whether it's "correct"). Your focus on "costs to us and how much we benefit" means we downplay or eliminate any consideration of the moral question. However, ignoring the moral question has the same effect as losing the moral argument to "this is our home and we shouldn't let strangers in" -- in both cases, the moral argument for "joining us" is treated as irrelevant. I'm not making an argument, just an observation I think is relevant when considering the issue.

DeFranker -- many thanks for taking the time, very helpful.

I spent last night thinking about this, and I now understand your (LW's) points better, and my own. To start, I think the ideas of epistemic rationality and instrumental rationality are unassailable as ideas -- few things make as much sense as what rationality is trying to do, in the abstract.

But when we say "rationality" is a good idea, I want to understand two fundamental things: in what context does rationality apply, and where it applies, what methodologies, if any, exist to actually practice it? I don't presuppose any answers to the above -- at the same time, I don't want to "practice rationality" before I understand how those two questions are answered or dealt with (I appreciate it's not your responsibility to answer them; I'm just expressing them as things I'm considering).

"Weaknesses" of rationality is not an appropriate question -- I now understand the visceral reaction. However, by putting rationality in context, one can better understand its usefulness from a practical perspective. Any lack of usefulness or applicability would be the "weakness/criticism" I was asking about, but upon reflection, I get to the same place by talking about context.

Let me step back a bit to explain why I think these questions are relevant. We all know the phrase "context matters" in the abstract. I would argue that epistemic rationality, in the abstract, is relevant to instrumental rationality because if our model of the world is incorrect, the manner in which we choose to reach our goals in that world will be affected. All I'm really saying here is that "context matters." Now, while most agree that context matters with respect to decision making, there's an open question as to what context actually matters. So there is always a potential debate over whether the world is understood well enough, and to the extent necessary, to successfully practice instrumental rationality -- this is clearly a relative/subjective determination.

With that in mind, any attempt to apply instrumental rationality would require some thought about epistemic rationality, and about whether my map is sufficient to make a decision. Does rationality, as it is currently practiced, offer any guidance on this? Let's pretend the answer is no -- that's fine, but then that's a potential "flaw" in rationality, or a hole where rationality alone does not help with an open question that is relevant.

I'm not trying to knock rationality, but I'm not willing to coddle it and pretend it's all there is to know if that comes at the cost of minimizing knowledge.

Great, thanks, this is helpful. Is the answer to the above questions, as far as you practice rationality, the same for instrumental rationality? It is an idea -- but with no real methodology? In my mind, decision theory could be a methodology by which someone practices instrumental rationality. To the extent it is, the above questions remain relevant (only in the sense that they should be considered).

I now have an appreciation of your point -- I can definitely see how the question "what are the flaws with epistemic rationality" could be viewed as a meaningless question. I was thinking about epistemic rationality as more than just an idea -- an idea WITH a methodology. Clearly the idea is unassailable (in my mind anyway), but methodologies (whether for rationality or some other purpose) could at least in concept have flaws, or perhaps flaws in that they cannot be applied universally -- that is what I was asking about.

Interestingly, your response raises a different question. If epistemic rationality is an idea and not a methodology, then rationality (as it is discussed here) leaves open the possibility of a methodology that may help with practicing epistemic rationality (i.e., something consistent with the IDEA of rationality, but a means by which you can practice it).

As I think most appreciate, ideas (not necessarily about rationality, but generally) suffer from being general and not giving a user a sense of "what to do." Obviously, getting your map to match reality is not an easy task, so methodologies for epistemic rationality could be helpful for putting the idea into practice.

This is particularly important if you're practicing instrumental rationality. That type of rationality is practiced "in the world," so having an accurate (or accurate enough) model is seemingly important to ensure that the manner in which you practice instrumental rationality makes sense.

Thus, a possible shortcoming of instrumental rationality is that it depends on epistemic rationality: because there isn't a clear answer to the question of "what is real," instrumental rationality is limited to the extent our beliefs about "what is real" are actually correct. You could say that instrumental rationality, depending on the circumstances, does not require a COMPLETE understanding of the world, so my observation, even if fair, must be applied on a sliding scale.

No -- I'm not saying your goals ought to be anything, and I'm not trying to win an argument, but I appreciate that you will interpret my motives as you see fit.

Let me try this differently -- there is an idea on LW that rationality is a "good" way to go about thinking [NOTE: correct me if I'm wrong]. By rationality, I mean exactly what is listed here:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

My question relates to putting these two ideas into context, with more of a focus on epistemic rationality (because it seems you need to know the world -- i.e., the context -- in which you're making decisions before you apply instrumental rationality). Is epistemic rationality practiced through a methodology (probability theory, decision theory, something else)? Or is the description above just an idea to be applied generically, e.g., by taking cognitive biases into account? If it's just a description of an idea, does that mean you cannot really "apply" it, and instead just try to keep the general tenets in mind when thinking about things?

If there's a methodology (or multiple) to be used to practice epistemic rationality, does that methodology (or methodologies) help in understanding all aspects of "reality" (again, keying off EY's definition)? [NOTE: It seems reality, if it could be understood, would mean the broadest understanding of who we are, why we are here, and how our world works day to day. Is LW using a different definition of reality?] If more than one methodology could apply depending on the situation, how do you distinguish between them?

If the "chosen" methodology(ies) for epistemic rationality is NOT appropriate for certain decisions, what alternatives are to be used? Also, how do you describe the distinction between the decisions for which the chosen methodology works and those for which it does not?

To be clear, I'm asking to get context for how rationality fits within the larger picture of the universe, including all of its uncertainty. I realize you may not have answers to all these questions, and there may not be consensus about any of it -- that's more than fine, since all I'm looking for is responses; I don't care what they actually are. For example, you or others may make certain assumptions on some of the questions to allow necessary simplifications -- all of that is fine. I just think the questions need to be considered before you can credibly apply (or seek to apply) rationality, and I want to see if you've thought about them and, if so, how you've handled them. If I'm being unreasonable or missing something with my questions, so be it, but I'd be interested in your thoughts.

DeFranker, thanks for the detailed note -- I take your points, which are reasonable and fair, but I want to share a different perspective.

The problem I'm having is that I'm not actually making any arguments as "correct" or saying any of you are wrong. An observation or statement made for the sake of discussion does not come with a conclusory judgment attached. Now, to the extent you say I need a better understanding to make dissenting points, fair. But all I want to know is what the weakest arguments against rationality are, and what relevance those weaknesses, if any, have on how much time and energy should be spent on rational choice theory as opposed to another theory, or no theory. This seems particularly appropriate with respect to THIS article, which asks believers of a theory to question that theory's weakest positions -- whether in application or otherwise. This is an analysis for believers to perform. Again, I'm not saying you lack strong responses to weaker positions, or even that you have weak positions -- I'm asking how those who follow rationality have approached this question and how they've disposed of it.

It would seem those who follow a theory have the greatest responsibility to consider the strongest arguments against that very theory (which is exactly why EY posted the article re: Judaism). Why is it so inappropriate to hold rationality to the same standard? I'm not presupposing an answer; I just want to know what YOUR answer is so I better understand your point of view. Perhaps your answer is "it's obvious this theory is correct," without more. I would be fine with that, simply because you've answered the question -- you've given me your perspective. Sure, I may ask additional questions, but the goal is not to be right or to win some online war; the goal is to learn (my effing name is "non-expert" -- you don't have to worry about me telling you that you're wrong, though I may question your logic/reasoning). I cannot learn unless I understand the perspectives of those who disagree with me.

And regarding the quoted text -- yes, while I appreciate that I did not follow the "culture" or norms of this site, I had looked at this site as a place for substantive answers and discussions. I'm not making a fully general counterargument -- I'm simply pointing out that attacking my jokes/jabs allows you to avoid my question. Again, to be clear, I didn't ask the question to prove you're wrong; I'm asking the question to hear your answer!

How has rationality, as a universal (or near-universal) theory of decision making, confronted its most painful weaknesses? What are rationality's weak points? The broader a theory is claimed to be, the more important it seems to really test its weaknesses -- which is why I assume you bring up religion, but the same standard should apply to rationality. This is not a cute question from a religious person; it's an intellectual inquiry from a person hoping to learn. In honor of the granddaddy of cognitive biases, confirmation bias, doesn't rational choice theory need to be vetted?

HungryTurtle makes an attempt to get at this question, but he gets too far into the weeds -- this allowed LW to simply compare the "cons" of religion with the "cons" of rationality, which is a silly inquiry. I don't care how the weaknesses of rationality compare to the weaknesses of Judaism, because rational theory, if claimed to be universally applicable with no weaknesses, should be tested on the basis of that claim alone, not on its weaknesses relative to some other theory.

NOTE: re-posting without the offending language in the hopes I don't need to create a new name. Looks like I lost on my instrumental rationality point; I got downvoted enough to be restricted. On the bright side, I am learning to admit I'm wrong (I was wrong to misjudge whether I'd offend LW, which prevented me from engaging with others on the substantive points I'm trying to learn more about).