All of non-expert's Comments + Replies

Interesting, so you are dividing morality into impact on immigrants and the idea that they should be allowed to join us as a moral right, with the former included in your analysis and the latter not.

Putting aside positions, from a practical perspective it seems that drawing that line will remain difficult, because "impact on immigrants" likely informs the very moral arguments I think you're trying to avoid. Or in other words, putting that issue (effect on immigrants) within the costs/benefits analysis requires some of the same subjective conside... (read more)

Nope, I'm just asking why you think that the moral argument should be ignored, and why that position is obvious. We're talking about a group of humans and what laws and regulations will apply to their lives, likely radically changing them. These decisions will affect their relatives, who may or may not be in similar positions themselves. When legislating about persons, it seems there is always some relevance as to how the laws will affect those people's lives, even if broader considerations (value to us/cost to us as a country) are also relevant.

to be ... (read more)

0drethelin
So from my point of view the moral argument is as I stated it earlier: We either should or should not allow immigrants because of moral laws. This argument is stupid because it is not based on consequences or information. Your point seems to be that the consequentialist point of view should take into account the impact on immigrants, which is different from what I meant by the moral argument. I'm pretty sure I agree with yours. A country is made up of people. The costs/benefits to those people are a subset of the costs/benefits to a country, and should be factored into the same.

if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

A different perspective I'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus that translating 'right' into those terms is correct, assuming right and wrong (fairness, etc.) are defined within the bounds of what we know?

I'm wondering if you would agree that you're making an implicit philosophical ar... (read more)

Look, there is no doubt an equivalence in your method, in that "they should join us" is put on the back burner along with "we should penalize them." I'm simply highlighting this point.

Or to put it another way, the moral statement I'm trying to make is that the moral value of absolutist moral considerations is less than utilitarian concerns in regards to costs/benefits. I don't actually care about moral arguments for or against immigration that aren't consequentialist.

In limiting the "consequentialist" argument to the "h... (read more)

0drethelin
What point are you trying to make? I'm really not sure. Completely ignoring the "Moral argument" seems obviously the correct thing to do, so I have to assume I'm misinterpreting what you mean by the moral argument.

I think you're implicitly making a moral statement (putting aside whether it's "correct"). Your focus on "costs to us and how much do we benefit" means we downplay or eliminate any consideration of the moral question. However, ignoring the moral question has the same effect as losing the moral argument to "this is our home and we shouldn't let strangers in" -- in both cases the moral argument for "joining us" is treated as irrelevant. I'm not making an argument, just an observation I think is relevant when considering the issue.

0drethelin
I don't see why this treats the moral argument for joining us as any less relevant than the moral argument for not joining us. And yes, this does downplay or eliminate consideration of the moral question, which is what I was going for. Or to put it another way, the moral statement I'm trying to make is that the moral value of absolutist moral considerations is less than utilitarian concerns in regards to costs/benefits. I don't actually care about moral arguments for or against immigration that aren't consequentialist.

DaFranker -- many thanks for taking the time, very helpful.

I spent last night thinking about this, and now I understand your (LW's) points better and my own. To start, I think the ideas of epistemic rationality and instrumental rationality are unassailable as ideas -- there are few things that make as much sense as the ideas of what rationality is trying to do, in the abstract.

But, when we say "rationality" is a good idea, I want to understand two fundamental things: In what context does rationality apply, and where it applies, what methodolog... (read more)

Great, thanks, this is helpful. Is the answer to the above questions, as far as you practice rationality, the same for instrumental rationality? Is it an idea, but with no real methodology? In my mind it would seem decision theory could be a methodology by which someone could practice instrumental rationality. To the extent it is, the above questions remain relevant (only in the sense that they should be considered).

I now have an appreciation of your point -- I can definitely see how the question "what are the flaws with epistemic rationality" could b... (read more)

0TheOtherDave
Agreed that it's a lot easier to talk about flaws in specific methodologies than flaws in broad goals.

Agreed that a decision theory is a methodology by which someone could practice instrumental rationality, and there's a fair amount of talk around here about what kinds of decision theories are best in what kinds of scenarios. Most of it goes over my head; I don't really know what it would mean to apply the different decision theories that get talked about here to real-world situations.

Agreed that there could be a methodology that may apply/help with practicing epistemic rationality. Or many of them.

Agreed that in the absence of complete information about the world, our ability to maximize expected value will always be constrained, and that this is a shortcoming of instrumental rationality viewed in isolation. (Not so much when compared to alternatives, since all the alternatives have the same shortcoming.)

No -- I'm not saying your goals ought to be anything, and I'm not trying to win an argument, but I appreciate you will interpret my motives as you see appropriate.

Let me try this differently -- there is an idea on LW that rationality is a "good" way to go about thinking [NOTE: correct me if I'm wrong]. By rationality, I mean exactly what is listed here:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible.

... (read more)
2TheOtherDave
A lot depends on how broad a brush I understand the word "methodology" to cover, but if I'm correctly understanding what you mean by the term, no, there's no particular methodology for how to practice epistemic rationality; it's more like what you refer to as "trying to keep the general tenets in mind while thinking about things". That said, I suppose there are common practices you could call endorsed methodologies if you were in the mood. For example, attaching confidence intervals to estimates and predictions is a practice you'll see a lot around here, with the implied (though not formalized) associated practice of comparing those estimates/predictions with later measurements, and treating an underconfident accurate prediction as a failure of prediction (that is, an event that ought to trigger recalibration).
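(A minimal illustration of the calibration practice described above, with invented predictions and outcomes; the bucketing approach is one common way to do it, not anything prescribed in this thread.)

    # Toy calibration check: group predictions by stated confidence and
    # compare each bucket's stated confidence to its observed hit rate.
    # All predictions and outcomes below are invented for illustration.
    from collections import defaultdict

    predictions = [
        (0.9, True), (0.9, True), (0.9, False),   # three 90% predictions
        (0.6, True), (0.6, False), (0.6, False),  # three 60% predictions
    ]

    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)

    for confidence, outcomes in sorted(buckets.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%}: observed {hit_rate:.0%} over {len(outcomes)} predictions")

On this toy data the 90% bucket comes in at 67%, a sign of overconfidence; an accurate prediction held at too low a confidence would, per the comment above, also count as a miscalibration in the other direction.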

DaFranker, thanks for the detailed note -- I take your points, they are reasonable and fair, but I want to share a different perspective.

The problem I'm having is that I'm not actually making any arguments as "correct" or saying any of you people are wrong. Making an observation/statement for the sake of discussion does not mean that there is a conclusory judgment attached to it. Now, to the extent that you say I need to have a better understanding to make dissenting points, fair, but all I want to know is what the weakest arguments against rationali... (read more)

2DaFranker
Now, I agree with most of what you said here. However, some of it doesn't quite parse for me, so here's my attempt at resolving what seem like communication issues.

This doesn't really tell me anything about what you want to know, even assuming you mean "strongest arguments against rationality" and/or "weakest arguments for rationality". Arguments for something are usually coupled with a claim - they are arguments for a claim. Which specific claim are you referring to when you use the word "rationality" in the claim above? I'm not asking a trick question, I just can't tell what you mean out of several hundreds of thousands of possible things you could possibly be thinking about. Sometimes, it could also be for or against a specific technique, where it is implied that the claim is "you should use this technique".

To me, the phrase "arguments for and against rationality" makes as much sense as the phrase "arguments for and against art" or the phrase "arguments for and against numbers". There's some missing element, some missing piece of context that isn't obvious to me and that wasn't mentioned explicitly. Here are some attempts at guessing what you could mean, just as an exercise for me and as points of comparison for you:

* "What are the strongest arguments against using bayesian updating to form accurate models of the world?" (i.e. the strongest arguments against the implied claim that you should use bayesian updating when you want to form accurate models of the world - this is the standard pattern.)
* "What are the strongest arguments against the claim that forming accurate models of the world is useful towards achieving your goals?"
* "What are the strongest arguments against the claim that forming accurate models of the world is useful to me?"
* "What are the strongest arguments against the use of evidence to decide on which beliefs to believe?"
* "What are the strongest arguments against the usefulness or accuracy of probabilities in general as oppos

How has Rationality, as a universal (or near-universal) theory of decision making, confronted its most painful weaknesses? What are rationality's weak points? The broader a theory is claimed to be, the more important it seems to really test the theory's weaknesses -- that is why I assume you bring up religion, but the same standard should apply to rationality. This is not a cute question from a religious person, more an intellectual inquiry from a person hoping to learn. In honor of the grand-daddy of cognitive biases, confirmation bias, doesn't rati... (read more)

3TheOtherDave
If you want to start a discussion about the weaknesses of rationality based on the assumption that understanding reality is the correct thing to value, I recommend you just do that. Asking me what my goals are in the context of insisting that my goals ought to be to understand reality, just confuses the issue. Coupled with your insistence that you're just asking questions and all this talk about winning and crushing dissent and whatnot, the impression I'm left with is that you're primarily interested in winning an argument, and not being entirely honest about your motives.
5DaFranker
(This comment is entirely about the meta-subject and your approach to this discussion, and doesn't engage with your dialogue with TheOtherDave.)

This is, in local parlance, called a Fully General Counterargument. It does not engage with the arguments we present at all, does not present any evidence that its claim might be true, but applies optimized sophistry to convince an audience that its claim is true and the alternatives untrue.

The response blocker is an anti-troll functionality, and does more good than harm to the epistemic hygiene of the community (as far as I can tell). Dissent is not crushed - if the community norms are respected, even very contrarian arguments can be massively upvoted. However, this usually requires more research, evidence and justification than non-contrarian arguments, because according to the knowledge we have, an opinion that disagrees with us starts with a lower credibility prior, and this prior needs more evidence to be brought up to the same level of credibility as other arguments that the community is neutral or positive about.

We¹ understand that it can be frustrating to someone who really wants to discuss and is interested to be blocked off like this, but this also seems to double as a filter for new users. New users who cannot muster the patience to deal with this issue are very unlikely to be mature and respectful enough to participate productively on LessWrong, since many of the relevant behaviors do correlate.

The best way "around" the block that prevents you from responding to comments is to PM users directly, and if something you want to say is of public interest it is usually recommended to ask a more neutral participant of the discussion, or someone you believe will represent and transmit your message well, to post what you have to say for you. Some users have even experimented a bit with this in the past and shown that changing the username that posts something does change the way even LW users will read and int

Thanks, EY. I am asking a real question, in that I want to know what people think of the question.

As a person who does not think rationality is as useful or as universal as people on this site do, I am at a disadvantage in that I'm in the minority here; however, I'm still posting/reading to question myself through engaging with those I disagree with. I seek the community's perspective, not necessarily to believe it or label it correct/wrong, but simply to understand it. My personal experience (with this name and old ones) has been that people general... (read more)

Thanks. I don't mean any weaknesses in particular; the idea laid out by EY was to confront your greatest weaknesses, so that is something for those who follow the theory to look into -- I'm just exploring :).

I guess what I'm not following is this idea of "choosing" an approach. Implicit in your answer, I think, is the idea that there is a "best" approach that must be discovered among the various theories on living life -- why is the existence of a theory that is the "best" indicative that it is universally applicable? The goal... (read more)

0TheOtherDave
No, I don't think that's implied. We do make decisions, and some processes for making decisions lead to different results than other processes, and some results are better than others. It doesn't follow that there's a single best approach, or that such an approach is discoverable, or that it's worthwhile to search for it. Is that the goal? I'm not sure it is. As above, I neither agree that understanding reality is a singularly important terminal goal, nor that finding the "best theory" for achieving my goals is a particularly high-priority instrumental goal. So, mostly, I feel like this entire comment is orthogonal to anything I actually said, and you're putting a lot of words in my mouth here. You might do better to just articulate what you believe without trying to frame it as a reply to my comment.
0DaFranker
I'm not sure what you mean. In such a case, rationality dictates that IFF you truly want to understand reality, you should find that "more" that is needed and use it instead of rationality. This is the rational course of action. Therefore it is rational to do that thing "instead of" doing rationality. Thus being rational means doing this thing that leads to understanding reality. This seems to imply that if you keep recursively applying rationality to your own application of rationality, you end up finding that that which leads with highest probability to the desired goal is always rationality.

As a matter of policy, I always downvote any comment that includes anything like your final paragraph.

5jooyous
I see these wonky pseudo-threats around the site a lot and they're really confusing to me. Of course I'm biased! I'm a human! Just because I'm hanging around this site doesn't mean I've cleared all my biases and now become a perfect rational agent. On one hand, I do want to weed out cognitive biases in situations where they're hindering my decision-making in important areas of my life. On the other hand, I still have a lot of information to sift through in the real world, so maybe some of the shortcuts my brain uses are pretty handy to keep around. One example of these would be "talk to people that don't threaten you; discourage people that do." Sure, this might be filtering out some potentially good discussions, but it still seems like a pretty good heuristic to me, especially out there in scary meatspace. ^_^
4DaFranker
First off: This is usually considered a very bad sign and to be against community norms and/or ethics. Many people would/will downvote your comment exclusively because of the quoted paragraph. My first impulse was to do so, but I'm overriding it in favor of this response and in light of the rest of your comment, which seems like a habit of reasoning to be strongly encouraged, regardless of other things I'll get to in a minute.

So, first, before any productive discussion of this can be done (edit: from my end, at least), I have to be reasonably confident that you've read and understood "What Do We Mean By "Rationality"?", which establishes as two separate functions what I believe you're referring to when you say "Rationality as a (near-)universal theory on decision-making."

Alright. Now, assuming you understand the point of that post and the content of "rationality", could you help me pinpoint your exact question? To me, "How has Rationality confronted its most painful weaknesses?" and "What are rationality's weak points?" are incoherent questions - they seem Mysterious - to the same extent that one could ask the same questions of thinking, of existence, of souls, of the Peano Axioms, or of basically anything that requires more context to properly compute those questions for.

If you're trying to question the usefulness of the function "be instrumentally rational", then the most salient weakness is that it is theoretically possible that a human could attempt to be instrumentally rational, end up applying it inexactly or inefficiently, waste time, not recurse to a high enough stack, or a slew of other mistakes. The second most important is that sometimes, even a human properly applying the principles of instrumental rationality will find out that their values are more easily fulfilled by doing something else and not applying instrumental rationality - at which point, because they are applying instrumental rationality and the function "be instrumentally rational" i

On the distant chance that you're actually attempting to be reasonable and are just messing it up, I downvoted this post because I automatically downvote everything that tries to Poison the Well against being downvoted. Being preemptively accused of confirmation bias is itself sufficient reason to downvote.

3[anonymous]
Well, here are my assessments of rationality's weakest points, from what I have read on Less Wrong so far. (That means some of these use "Rationality" when "Less Wrong" may be better used, which could be a crippling flaw in my list of weaknesses.) It sounds like you may be looking for something like these:

1: Several aspects of rationality require what appears to be a significant amount of math and philosophy to get right. If you don't understand that math or the philosophy, you aren't really being a rationalist; you're more just following the lead of other rationalists because they seem smart, or possibly because you liked their books, and possibly cheerleading them on on occasion. Rationality needs simpler explanations that can be easily followed by people who are not in the top 1% of brain processing.

2: Rationality also requires quick explanations. Some people who study rationality realize this: when attempting to automate their decision theory, the problem "How do we make a computer that doesn't turn the entire universe into more computer to be absolutely sure that 2+2 is 4?" is considered a substantial problem. Quick answers tend to be eschewed for ever more levels of clarity, which take an increasingly large amount of time, and even when confronted with the obvious problem that going for nothing but clarity causes, Rationalists consider it to be something that requires even more research into how to be clear.

3: Rationality decision problems really don't rise above the level of religion. Consider that in many rationality decision problems, the first thing that Rationalists do is presuppose "Omega," who is essentially "God" with the serial numbers filed off. Infinite (or extremely high) utility and disutility are thrown around like so many parallels of Heaven and Hell. This makes a lot of rationality problems the kind of thing that those boring philosophers of the past (that Rationalists are so quick to eschew) have discussed ad nauseam. It's ha
5Qiaochu_Yuan
This is not the only hypothesis that downvotes to this post or failures to respond provide evidence for. It also provides evidence, in the Bayesian sense, that people think you're a troll, or that your writing is suboptimal, or that only a few people managed to see this post in the first place, or... etc.

Anyway, it is not entirely clear to me what you mean by "rationality," but I'll use a caricature of it, namely "use Bayes' theorem and then do the thing that maximizes expected utility."

One big problem is what your priors should be. Probably no human in the world actually uses Solomonoff induction (and it is still not entirely clear to me that this is a good idea), so whatever else they're using is an opportunity for bias to creep in.

Another big problem is how you should actually use Bayes' theorem in practice. Any given observation contains way more information in it than you can reasonably update on, so you need to make some modeling decisions and privilege certain kinds of information above others, then find some kind of reasonable procedure for estimating likelihood ratios, and these are all more opportunities for bias to creep in.

And a third big problem is how to actually compute utilities. Before you do this you need to address the question of whether humans even have utility functions, whether they should aspire to have utility functions (whatever "should" means here), and if so, what your utility function is...

These are all big problems. In response I would say that ideal decision-making is not a thing that we can do, but understanding more about what the ideal looks like can help us move our decision-making closer to ideal.
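(A toy sketch of the caricature above -- "use Bayes' theorem and then do the thing that maximizes expected utility" -- with every number invented; the prior, the likelihoods, and the utility table are exactly the three bias entry points the comment identifies.)

    # Step 1: Bayes' theorem. Update P(rain) after observing clouds.
    prior_rain = 0.3            # the prior - problem #1: where does this come from?
    p_clouds_given_rain = 0.9   # likelihoods - problem #2: modeling decisions
    p_clouds_given_dry = 0.2

    p_clouds = p_clouds_given_rain * prior_rain + p_clouds_given_dry * (1 - prior_rain)
    p_rain = p_clouds_given_rain * prior_rain / p_clouds   # ~0.66

    # Step 2: maximize expected utility - problem #3: where do utilities come from?
    utilities = {
        "take umbrella":  {"rain": 5, "dry": 3},
        "leave umbrella": {"rain": -10, "dry": 6},
    }

    def expected_utility(action):
        u = utilities[action]
        return p_rain * u["rain"] + (1 - p_rain) * u["dry"]

    best = max(utilities, key=expected_utility)
    print(best, round(expected_utility(best), 2))   # take umbrella, ~4.32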
5TheOtherDave
If by "Rationality, as a universal theory (or near-universal) on decision making" you mean using Bayes' Theorem as a way of determining the likelihood of various potential events and consequently estimating the expected value of various courses of action, which is something that "rationality" sometimes gets used to mean on this site, I'd say (as many have said before me) that one big weakness is the lack of reliable priors. A mechanism for precisely calculating how much I should update P(x) from an existing made-up value based on things I don't necessarily know doesn't provide me with much guidance in my day-to-day life. Another big weakness is computational intractability. If you mean more broadly making decisions based on the estimated expected value of various courses of action, I suppose the biggest weakness is again computational intractability. Which in turn potentially leads to sloppiness like making simplifying assumptions that are so radically at odds with my real environment that my estimates of value are just completely wrong. If you mean something else, it might be useful if you said what you mean more precisely. It's worth noting explicitly that these weaknesses are not themselves legitimate grounds for choosing some other approach that shares the same weaknesses. For example, simply making shit up typically results in estimates of value which are even more wrong. But you explicitly asked about weaknesses in isolation, rather than reasons to pick one decision theory over another.

OK, I wasn't trying to play "gotcha," just answering your question. Good chat, thanks for engaging with me.

You suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. To the extent that is wrong, you're omitting emotion hacking (a subjective factor) from your application of epistemic rationality.

1Qiaochu_Yuan
I'm happy to agree that emotion hacking is important to epistemic rationality.

Sure. Note that I don't offer this as conclusive or correct, just as something I'm thinking about. Also, let's assume rational choice theory is universally applicable to decision making.

Rational choice theory gives you an equation to use, and all we have to do is fill that equation with the proper inputs, value them correctly, and we get an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily convertible to probabilities/numbers -- I'm worried this is actually more problematic than we think... (read more)
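(For reference, the equation presumably meant here is the standard expected-utility rule of rational choice theory -- a textbook formulation, not one given anywhere in this thread:)

    \[
    a^{*} = \operatorname*{arg\,max}_{a \in A} \; \sum_{s \in S} P(s)\, U(a, s)
    \]

Here A is the set of available actions, S the set of possible states of the world, P(s) the probability assigned to each state, and U(a, s) the value of taking action a in state s. The worry above is precisely that P(s) and U(a, s) are hard to fill in for subjective questions.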

0Qiaochu_Yuan
I agree that this is problematic but don't see what it has to do with what I've been saying.

Thanks for the clarification, now I understand.

Going back to the original comment i commented on:

emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Particularly with your third type of emotion hacking ("hacking your emotional responses to external stimuli"), it seems emotion hacking is vital for epistemic rationality -- I guess that relates to my original point, that hacking emotions is at least as important for epi... (read more)

0Qiaochu_Yuan
Can you clarify what you mean by this?

I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli.

Isn't this the ONLY kind of emotion-hacking out there? What emotions are expressed irrespective of external stimuli? Seems like a small or insignificant subset.

But it's not as if I can respond to other people's thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.

The second two paragraphs above are responding to this. Sorry to throw it back at you, ... (read more)

2Qiaochu_Yuan
Let me make some more precise definitions: by "emotional responses to my thoughts" I mean "what I feel when I think a given thought," e.g. I feel a mild negative emotion when I think about calling people. By "emotional responses to my behavior" I mean "what I feel when I perform a given action," e.g. I feel a mild negative emotion when I call people. By "emotional responses to external stimuli" I mean "what I feel when a given thing happens in the world around me," e.g. I feel a mild negative emotion when people call me. The distinction I'm trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.

No, I'm just making the point that for the purposes of classifying different kinds of emotion-hacking I don't find it useful to have a category for other people's thoughts separate from other people's behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don't have direct access to other people's thoughts.

What problem?

All emotions are responses to external stimuli, unless your emotions relate only to what is going on in your head, without reference to the outside (i.e. outside your body) world.

I agree you can't respond to others' thoughts, unless they express them such that they are "behaviors." Interestingly, the "problem" you have with the sounds or images (or words?) which purport to be correlated to others' thoughts is the same exact issue everyone is having with you (or me).

If we're confident in our own ability to express our thoughts (i.e. the correlation problem is not an issue for you), then how much can we dismiss others' expressions because of that very same issue?

1Qiaochu_Yuan
I don't understand what point you're trying to make.

Whose thoughts and whose behaviors? Not disagreeing, just asking.

1Qiaochu_Yuan
My thoughts and my behaviors. I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli. But it's not as if I can respond to other people's thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.

The best evidence that confirmation bias is real and ever-present is a website of similarly thinking people that values comments based on those very users' reactions. Perhaps unsurprisingly, those who conform to the conventional thought are rewarded with points. So I guess while the point system doesn't actually work as a substantive matter, at least we are afforded a constant reminder that confirmation bias is a problem even among those who purport to take it into account.

Of course, my poking fun will only work so long as I don't get so many negative... (read more)

What you're saying is obviously true, but it goes beyond the information available. The question, limited to the facts given, is representative of a larger point, which is the one I'm trying to explain as a general observation and is not limited to whether in fact that tree fell and made a noise.

Btw, I never thanked you for our previous back and forth -- it was actually quite helpful, and your last comment in our discussion has kept me thinking for a couple of weeks now; perhaps in a couple more I will respond!

What is the basis for the position that knowledge of the world must come from analytical/probabilistic models? I'm not questioning the "correctness" of your view, only wondering about your basis for it. It seems awfully convenient that a type of model that yields conclusions is in fact the correct one -- put another way, why is the availability of a clear methodology that gives you answers indicative of its universal applicability in attaining knowledge?

Traditional philosophy, as you correctly point out, has failed to bridge its theory to practice -... (read more)

1non-expert
The best evidence that confirmation bias is real and ever-present is a website of similarly thinking people that values comments based on those very users' reactions. Perhaps unsurprisingly, those who conform to the conventional thought are rewarded with points. So I guess while the point system doesn't actually work as a substantive matter, at least we are afforded a constant reminder that confirmation bias is a problem even among those who purport to take it into account. Of course, my poking fun will only work so long as I don't get so many negative points that I can no longer question the conventional thought (gasp!). What is my limit? I'll make sure to conform just enough to stay on here. :) The worst part is I'm not even trying to troll; I'm trying to listen and question at the same time, which is how I thought I'm supposed to learn!
1TheOtherDave
This simply isn't true. There are lots of ways I can know a tree has fallen, even if nobody has heard the tree fall.

Emotion-hacking seems far more important in epistemic rationality, as your understanding of the world is the setting in which you use instrumental rationality, and your "lens" (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.

3Qiaochu_Yuan
I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.

With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different.

... (read more)
1TheOtherDave
Or at least approximated. Yes. Lovely.

I would say, rather, that it has no purpose at all in the context of that question. Having a false belief is not a useful purpose. And, as I've said before, I agree that there exist questions without answers, and questions whose answers are necessarily beyond the scope of human knowledge, and I agree that rationality doesn't provide much value in engaging with those questions... though it's no worse than any approach I know of, either.

As above, I submit that in all cases the approach I describe either works better than (if there are answers, which there often are) or as well (if not) as any other approach I know of. And, as I've said before, if you have a better approach to propose, propose it!

I don't know that. But I have to make decisions anyway, so I make them using the best approach I know. If you think I should do something different, tell me what you think I should do. OTOH, if all you're saying is that my approach might be wrong, then I agree with you completely, but so what? My choice is still between using the best approach I know of, or using some other approach, and given that choice I should still use the best approach I know of. And so should you.

For the record, that's also the consensus position here. The interesting question is, given that we don't have 100% certainty, what do I do now?

We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective"

I don't dispute the possibility that your conclusion may be correct; I'm wondering about the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fin... (read more)

What do you disagree with? That "truth is relative" applies only to moral questions? Or that it applies to more than moral questions?

If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read... even EY :)

2MugaSofer
My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective". Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise - see e.g. Nazis.

EDIT: Regarding EY's writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".

I actually don't think we're using the word differently -- the issue was premised solely on cases where the answer cannot be known after the fact. In that case, our use of "confidence" is the same -- it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not the ultimate "correctness" of the decision (because it cannot be known), rationality is important only if you believe it to be the correct way to make decisions.

-1MugaSofer
Indeed. And probability is confidence, and Bayesian probability is the correct amount of confidence.

Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

How is this different from being "comfortable" on a personal level? If it isn't, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer -- the "truth" is irrelevant. For exampl... (read more)

1TheOtherDave
Yes. The vial is either poisoned or it isn't, and my task is to decide whether to drink it or not. Do you deny that?

Yes, I agree. Indeed, looking for systems to find answers that are better than the one I'm using makes sense, even if they aren't best, even if I can't ever know whether they are best or not.

Sure. But "which vial is poisoned?" isn't one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.

This is where we disagree. Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way. And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.

Again, this is where we disagree. The relevance of "Truth" (as you're referring to it... I would say "reality") is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.

Sure, that's true. But it's far more useful to better entangle our decisions (our "subjective truths," as you put it) with reality ("Truth") before we make those decisions.
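(A minimal sketch of the entanglement described above, with invented numbers: if the poisoned vial tends to weigh more, a weight reading shifts the 50/50 prior by the likelihood ratio.)

    # Hypothetical numbers, purely illustrative: update a 50/50 prior on
    # "vial A is poisoned" after observing that vial A weighs more.
    prior = 0.5                  # P(A is poisoned) before any evidence
    p_heavier_if_poisoned = 0.8  # P(A weighs more | A poisoned) - assumed
    p_heavier_if_safe = 0.3      # P(A weighs more | A safe) - assumed

    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
    p_heavier = p_heavier_if_poisoned * prior + p_heavier_if_safe * (1 - prior)
    posterior = p_heavier_if_poisoned * prior / p_heavier
    print(f"P(A poisoned | A weighs more) = {posterior:.2f}")   # ~0.73

The point is only structural: any contingent fact that is more likely in one world than the other moves the decision away from a coin flip.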

I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

I referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right. If it gives confidence/ego/comfort that you've derived the right answer, being "righ... (read more)

1TheOtherDave
Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

I'm not quite sure what "right" means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there's no value in knowing whether A or B is true.

Yes, pretty much. I wouldn't say "errs", but semantics aside, we're always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem.

There are many decisions I'm obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question "is the world A or B?" has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective. But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)

Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.

As above, there are many situations where I'm obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.

I'd say LW is willing to push rationality as the best "theory" in all cases short of perfect knowledge right up until the point that a better one comes along, where "better" and "best" refer to their ability to reliably obtain benefits. That's why I asked
-2MugaSofer
Because it helps us make decisions. Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful 'round here for producing fruitful discussions - there's no point arguing about whether the tree in the forest made a sound if I mean "auditory experience" and you mean "vibrations in the air". This is known as "Rationalist's Taboo", after a game with similar rules, and replacing a word with (your) definition is known as "tabooing" it.

Perspectivism provides that all truth is subjective, but in practice, this characterization has no relevance to the extent there is agreement on any particular truth. For example, "Murder is wrong," even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.

0Peterdjones
Thanks for the clarification.
-3MugaSofer
Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there's a sizable minority here who won't.

Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

... (read more)
-2MugaSofer
Indeed. One of the purposes of this site is to help people become more rational - closer to a mathematical perfect reasoner - in everyday life. In math problems, however - and every real problem can, eventually, be reduced to a math problem - we can always make the right choice (unless we make a mistake with the math, which does happen.)

Unfortunately for you, most of the basic introductory-level stuff - and much of the really good stuff generally - is by him. So I'm guessing there's a certain selection effect for people who enjoy/tolerate his style of writing.

I'm still not sure how truth could be "relative" - could you perhaps expand on what you mean by that? - although obviously it can be obscured by biases and simple lack of data. In addition, some questions may actually have no answer, because people are using different meanings for the same word or the question itself is contradictory (how many sides does a square triangle have?)

EDIT: A lot of people here - myself included - practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
5Peterdjones
Inasmuch as subjectivism is a form of relativism, those comments seem to contradict each other.
3TheOtherDave
I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

Yes, this community is generally concerned with methods for, as you say, getting "the right answer more often than not." And, sure, sometimes a marginal increase in my chance of getting the right answer isn't worth the cost of securing that increase -- as you say, sometimes "accurately identifying the proper inputs and valuing them correctly [...] is simply not practical" -- so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get.

To say that "rationality falls short" in these cases suggests that it's being compared to something. If you're saying it falls short compared to perfect knowledge, I absolutely agree. If you're saying it falls short compared to something humans have access to, I'm interested in what that something is.

I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it's "the best answer" has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C.

I have no idea what you mean by the truth being "relative".

OK, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgments may still hold in the abstract, they are often not practical to use (or so I would argue). Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world? You don't have to sign on completely to Nietzsche's theory to see its ... (read more)

0MugaSofer
Unreliable evidence, biased estimates etc. can, in fact, be taken into account.

This. Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

Oh, I'm not going to downvote your comments or anything. I just thought you might prefer your comments to be easier to read and avoid signalling ... well, disrespect, ignorance, crazy-ranting-on-the-internet-ness, and all the other low status and undesirable signals given off. Of course, I'm giving you the benefit of the doubt, but people are simply less likely to do so when you give off signals like that. This isn't necessarily irrational, since these signals are, indeed, correlated with trolls and idiots. Not perfectly, but enough to be worth avoiding (IMHO.)
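(One way to read "unreliable evidence can be taken into account," sketched with invented numbers: model the reporter as accurate only with probability r, and the effective likelihoods -- and hence the posterior -- get discounted accordingly. The witness model here is an illustrative assumption, not anything specified in the thread.)

    # Illustrative only: folding an unreliable report into a Bayesian update.
    # The witness reports accurately with probability r, else guesses 50/50.
    prior = 0.10   # P(hypothesis) before the report - assumed
    r = 0.70       # P(witness reports accurately) - assumed

    p_yes_if_true = r * 1.0 + (1 - r) * 0.5    # P("yes" | H true)  = 0.85
    p_yes_if_false = r * 0.0 + (1 - r) * 0.5   # P("yes" | H false) = 0.15

    p_yes = p_yes_if_true * prior + p_yes_if_false * (1 - prior)
    posterior = p_yes_if_true * prior / p_yes
    print(f"posterior = {posterior:.2f}")   # ~0.39, not 1.0: the report counts, but less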

I don't follow the relevance of the article, as it seems quite obvious. The real problem with the black and white in the world of rationality is the assumption that there is a universal answer to all questions. The idea of "grey" helps highlight that many questions have no one correct universal answer. What I don't understand about rationalists (LW rationalists) is that they live in a world in which everything is either right or wrong. This simplifies a world that is not so simple. What am I missing?

-1MugaSofer
Offtopic: Have you considered running your comments through a spell- and grammar-checker? It might help with legibility and signalling competence.

Ontopic: Rationalists, or at least Bayesians, use probabilities, not binary right-or-wrong judgments. There is, mathematically, only one "correct" probability given the data; is that what you mean?