If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how ...
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.
The problem feels related to Pascal's wager - how to deal with the low-probability disaster.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, b...
I think that international relations is a simple extension of social-contract-like considerations.
If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.
You seem to be suggesting that [government] enables [cooperation]
I guess you mean that I'm saying cooperation is impossible without government. ...
Values start to have costs only when they are realized or implemented.
How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?
Costlessly increasing the welfare of strangers doesn't sound like altruism to me.
OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)
But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.
If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.
You don't need it to have media of exchange, nor cooperation between individuals, nor specialization
Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.
Yes, these things can exist to a smal...
Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact; I can't think of a way to make it clearer.
Maybe ponder this:
How could my quality of life be affected by something with no causal influence on me?
Why does it seem false?
If welfare of strangers is something you value, then it is not a net cost.
Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).
Under that...
The question is not one of your goals being 50% fulfilled
If I'm talking about a goal actually being 50% fulfilled, then it is.
"Risk avoidance" and "value" are not synonyms.
Really?
I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from bei...
Apologies if my point wasn't clear.
If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we are using similar enough definitions of altruism to understand each other.
We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.
Furthermore, being nice in a way that exposes me to undue risk i...
Point 1:
my goals may be fulfilled to some degree
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn't, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.
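A minimal formal sketch of that dominance claim (my notation, assuming both options are certain and utility u is non-decreasing in the degree of fulfillment f): f2 = 0.51 > 0.50 = f1 implies u(f2) >= u(f1), with strict inequality whenever greater fulfillment is strictly preferred, so no attitude toward risk is needed to rank the two options.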
The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
But risk is i...
I did mean after controlling for an ability to have impact
Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?
Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.
I would call it a bias because it is irrational.
It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).
Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
Rationality is about implementing your goals
That's what I meant.
An interesting claim :-) Want to unroll it?
Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending how you define it) are not orthogonal.
Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. furthers one's goals, i.e. it is rational. I kno...
Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.
empathy is not [one level removed from terminal values]
Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise "do exactly as you feel immediately inclined, at all times," would be all we needed to know about morality.
I see Sniffnoy also raised the same point.
I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but has higher expectation. In which case, I would call it a bias.
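To make the pattern concrete with purely illustrative numbers of my own: a guaranteed 50 versus a fair coin flip paying 0 or 120. The flip has expectation

0.5 x 0 + 0.5 x 120 = 60 > 50,

yet the risk-averse choice is the guaranteed 50. On the assumption that the stated payoffs already measure everything I care about, that preference is what I'm calling a bias.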
A couple of points:
(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say
[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"
Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb...
Thanks for taking the time to try to debunk some of the sillier aspects of classic utilitarianism. :)
‘Actual value’ exists only theoretically, even after the fact.
You've come close to an important point here, though I believe its expression needs to be refined. My conclusion is that value has real existence. This conclusion is primarily based on the personal experience of possessing real preferences, and my inference (to a high level of confidence) that other humans routinely do the same. We might reasonably doubt the a priori correspondence between ac...
If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good"....
If that is your basis for a scientific standard, then I'm afraid I must withdraw from this discussion.
Ditto, if this is your idea of humor.
what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?
That's just silly. What if c = 299,792,458 m/s is a mathematica...
I quite like Bob Trivers' self-deception theory, though I only have tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism; hence it may be profitable to suppress that recognition, if Trivers is on to something.
Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self analysis, and you're quite possibly on to something, that the computational overhead just doesn't pay off.
I'm not conflating anything. Those are different statements, and I've never implied otherwise.
The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.
"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim abo...
I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?
One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one's own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I'm sure you appreciate how...
Are there "elements of" which don't contain value judgements?
That strikes me as a question for dictionary writers. If we agree that Newton's laws of motion constitute such an element, then clearly, there are such elements that do not contain value judgements.
Is Alice's preference for cabernet part of Alice's world model?
iff she perceives that preference.
If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well.
I'm not sure this follows by logical necessity, but how is t...
Alice is part of the world, right? So any belief about Alice is part of a world model. Any belief about Alice's preference for cabernet is part of a world model - specifically, the world model of whoever holds that belief.
By any chance....?
Yes. (The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of". )
Is there something wrong with that? I inferred that to also be the meaning of the original poster.
A value judgement both uses and mentions values.
The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one's inferences.)
This is how it is with all forms of inference.
Throwing a ball is not an inference (note that 'inference' and 'judgement' are synonyms), thus throwing a ball is in no way necessarily part of a world model, and for our purposes, in no way analogous to making a value judgement.
I never said anything of the sort that Alice's values must necessarily be part of all world models that exist inside Alice's mind. (Note, though, that if we are talking about 'world model,' singular, as I was, then world model necessarily includes perception of some values.)
When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.
What levels am I confusing? Are you sure it's not you that is confused?
Your comment bears some resemblance to that of Lumifer. See my reply above.
whose world model?
Trivially, it is the world model of the person making the value judgement I'm talking about. I'm trying hard, but I'm afraid I really don't understand the point of your comment.
If I make a judgement of value, I'm making an inference about an arrangement of matter (mostly in my brain), which (inference) is therefore part of my world model. This can't be otherwise.
Furthermore, any entity capable of modeling some aspect of reality must be, by definition, capable of isolating salient phenomena, which amounts to making value judgements. Th...
A minor point in relation to this topic, but an important point, generally:
It seems to be more of a contrarian value judgment than a contrarian world model
Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.
Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views....
Thanks, I'll take a look at the article.
If you don't mind, when you say "definitely not clear," do you mean that you are not certain about this point, or that you are confident, but it's complicated to explain?
I'm not sure that's what Jaynes meant by correspondence with common sense. To me, it's more reminiscent of his consistency requirements, but I don't think it is identical to any of them.
Certainly, it is desirable that logically equivalent statements receive the same probability assignment, but I'm not aware that the derivation of Cox's theorems collapses without this assumption.
Jaynes says, "the robot always represents equivalent states of knowledge by equivalent plausibility assignments." The problem, of course, is knowing that 2 statements ar...
Thanks for taking the time to elaborate.
I don't recall that desideratum in Jaynes' derivations. I think it is not needed. Why should it be needed? Certainty about axioms is a million miles from certainty about all their consequences, as seems to be the exact point of your series.
Help me out, what am I not understanding?
Maybe I'm just thick, but I'm not at all convinced by your claim that probabilistic reasoning about potential mathematical theorems violates any desiderata.
I re-read the post you linked to in the first line, but am still not satisfied. Could you be a bit more specific? Which desideratum? And how violated?
Perhaps it will help you explain, if I describe how I see things.
Since mathematical symbols are nothing more than (for example) marks on a bit of paper, there is no sense in which strings of such symbols have any independent truth (beyond the fact that th...
Jonah was looking at probability distributions over estimates of an unknown probability
What is an unknown probability? Forming a probability distribution means rationally assigning degrees of belief to a set of hypotheses. The very act of rational assignment entails that you know what the probability is.
Thanks, I was half getting the point, but is this really important, as you say? If my goal is to gain value by assessing whether or not your proposition is true, why would this matter?
If the goal is to learn something about the person you are arguing with (maybe not as uncommon as I'm inclined to think?), then certainly, care must be taken. I suppose the procedure should be to form a hypothesis of the type "Y was stated in an inefficient attempt to express Z," where Z constitutes possible evidence for X, and to examine the plausibility of that hypothesis.
Not sure if I properly understood the original post - apologies if I'm just restating points already made, but I see it like this.
Whatever it consists of, it's pretty much the definition of rationality that it increases expected utility. Assuming that the intermediate objective of a rationalist technique like steelmanning is to bring us closer to the truth, then there are 2 trivial cases where steelmanning is not rational:
(1) When the truth has low utility. (If a lion starts chasing me, I will temporarily abandon my attempt to find periodicity in the digit...
A few terminological headaches in this post. Sorry for the negative tone.
There is talk of a "fixed but unknown probability," which should always set alarm bells ringing.
More generally, I propose that whenever one assigns a probability to some parameter, that parameter is guaranteed not to be a probability.
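To illustrate what I think is actually going on when people talk this way (a toy sketch of my own, using a standard Beta-Bernoulli setup; nothing here comes from the post under discussion): the hypotheses are candidate long-run frequencies of a coin, and the probabilities proper are the credences we assign to those hypotheses, not the frequency parameter itself.

```python
# Toy sketch, entirely my own: what a "distribution over a probability"
# looks like once the terminology is unpacked. The hypotheses are candidate
# long-run frequencies of a coin; the probabilities proper are the credences
# assigned to those hypotheses.
from scipy.stats import beta

heads, tails = 7, 3                      # assumed observations
posterior = beta(1 + heads, 1 + tails)   # uniform Beta(1, 1) prior, updated

# Credences over ranges of the long-run frequency f.
for lo, hi in [(0.0, 0.5), (0.5, 0.7), (0.7, 1.0)]:
    credence = posterior.cdf(hi) - posterior.cdf(lo)
    print(f"credence that {lo} < f < {hi}: {credence:.3f}")
```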
I am also disturbed by the mention of Knightian uncertainty, described as "uncertainty that can't be usefully modeled in terms of probability." Now there's a charitable interpretation of that phrase, and I can see that there may be a ps...
Nice discussion of game theory in politics. Is there any theoretical basis for expecting the line-item veto generally to be more harmful than beneficial to the president?
(Not an attempt to belittle the above fascinating example, but genuine interest in any related, more general results of the theory.)
Perhaps some explanation is in order. (I thought it was quite a witty thought experiment, but apparently it's not appreciated.)
If it is in principle impossible to explain why one ought to do something, then what is the function of the word "ought"? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary.
Alternatively, if it is not in principle impossible, then trivially the condition 'ought' (the condition of oughting?) rests entirely upon real facts about the universe, and the position of Randaly is false.
I...
Thanks for bringing that article to my attention.
You explain how you learned skills of instrumental rationality from debating, but in doing so, you also learned reliable answers to questions of fact about the universe: how to win debates. When I'm learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn't make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.
Reg...
The terminology is a bit new to me, but it seems to me epistemic and instrumental rationality are necessarily identical.
If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.
Furthermore, what kind of rationality does not pursue goals? If I have no interest in chess, and ability to play chess will have no impact on any of my present or future goals, then it would seem to be irrational of me to learn to play chess.
Let's define an action as instrumentally rational if it brings you closer to your goal.
Suppose my goal is to get rich. Suppose, on a whim, I walk into a casino and put a large amount of money on number 12 in a single game of roulette. Suppose number 12 comes up. Was that rational?
Same objection applies to your definition of epistemically rational actions.
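To make the objection concrete (my arithmetic, assuming a standard American wheel with 38 pockets and a 35-to-1 payout on a single number):

expected payoff per unit staked = (1/38)(+35) + (37/38)(-1) = -2/38 ≈ -0.053

The bet loses about 5% of the stake in expectation, whichever number happens to come up, so the win was luck; an outcome-based definition has no way to register that.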
Thanks for the welcome.
I'm in Houston.
As a thought experiment, this is interesting, and I’m sure informative, but there is one crucial thing that this post neglects to examine: whether the inscription under the hood actually reads “humanity maximizer.” The impression from the post is that this is already established.
But has anybody established, or even stopped to consider whether avoiding the loss of 10^46 potential lives per century is really what we value? If so, I see no evidence of it here. I see no reason to even suspect that enabling that many lives in the distant future has any remotely...
I haven't had much explicit interaction with these inside/outside view concepts, and maybe I'm misunderstanding the terminology, but a couple of the examples of outside views given struck me as more like inside views: Yelp reviews and the advice of a friend are calibrated instruments being used to measure the performance of a restaurant, ie to build a model of its internal workings.
But then almost immediately, I thought, "hey, even the inside view is an outside view." Every model is an analogy, e.g. an analogy in the sense of this thing A is a bi...
Hi folks
I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.
I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that 'science' is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn't go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of 'physics' (of course, really it ...
I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.
The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards.....,' under which, an ace is likely. The interesting question is, why?
In order to see the question as interesting, though, I first have to see the effect as real.