All of tom_cr's Comments + Replies

tom_cr00

I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.

The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards.....,' under which, an ace is likely. The interesting question is, why?

In order to see the question as interesting, though, I first have to see the effect as real.

tom_cr-20

If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

Thanks, that focuses the argument for me a bit.

So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how ... (read more)

2Said Achmiz
A point of terminology: "utility function" usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes' occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is "distribution" (or more fully, "frequency [or probability] distribution over utility of outcomes"). To the rest of your comment, I'm afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to "quantify betterness". It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are "supposed" to say, and proceed from there. I will reply more fully when I have time.
tom_cr00

Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.

One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.

The problem feels related to Pascal's wager - how to deal with the low-probability disaster.

2Said Achmiz
I really do want to emphasize that if you assume that "losing" (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be "losing takes you out of the game", or "losing makes it harder to keep playing", or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have. I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or "utility but without taking into account secondary effects", or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that's what determines that outcome's position on the graph's x-axis. (Edit: And it's crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.) This is not a Pascal's Wager argument. The low-utility outcomes aren't assumed to be "infinitely" bad, or somehow massively, disproportionately, unrealistically bad; they're just... bad. (I don't want to get into the realm of offering up examples of bad things, because people's lives are different and personal value scales are not absolute, but I hope that I've been able to clarify things at least a bit.)
tom_cr00

Thanks very much for the taking the time to explain this.

It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.

It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, b... (read more)

0Said Achmiz
Just a brief comment: the argument is not predicated on being "kicked out" of the game. We're not assuming that even the lowest-utility outcomes cause you to no longer be able to continue "playing". We're merely saying that they are significantly worse than average.
tom_cr-10

I think that international relations is a simple extension of social-contract-like considerations.

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for," is a phrase you should be careful about using.

You seem to be suggesting that [government] enables [cooperation]

I guess you mean that I'm saying cooperation is impossible without government. ... (read more)

2Nornagest
The social contract, according to Hobbes and its later proponents, is the implicit deal that citizens (and, at a logical extension, other subordinate entities) make with their governments, trading off some of their freedom of action for greater security and potentially the maintenance of certain rights. That implies some higher authority with compelling powers of enforcement, and there's no such thing in international relations; it's been described (indeed, by Hobbes himself) as a formalized anarchy. Using the phrase to describe the motives for cooperation in such a state extends it far beyond its original sense, and IMO beyond usefulness. There are however other reasons to cooperate: status, self-enforced codes of ethics, enlightened self-interest. It's these that dominate in international relations, which is why I brought that up.
tom_cr-10

Values start to have costs only when they are realized or implemented.

How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?

Costlessly increasing the welfare of strangers doesn't sound like altruism to me.

OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)

But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.

0Lumifer
I don't think we understand each other. We start from different points, ascribe different meaning to the same words, and think in different frameworks. I think you're much confused and no doubt you think the same of me.
tom_cr-10

If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.

You don't need it to have media of exchange, nor cooperation between individuals, nor specialization

Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.

Yes, these things can exist to a smal... (read more)

1Nornagest
I have my quibbles with the social contract theory of government, but my main objection here isn't to the theory itself, but that you're attributing features to it that it clearly isn't responsible for. You don't need post-apocalyptic chaos to find situations that social contracts don't cover: for example, there is no social contract on the international stage (pre-superpower, if you'd prefer), but nations still specialize and make alliances and transfer value. The point of government (and therefore the social contract, if you buy that theory of legitimacy) is to facilitate cooperation. You seem to be suggesting that it enables it, which is a different and much stronger claim.
tom_cr-40

Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact, I can't think of a way to make it clearer.

Maybe ponder this:

How could my quality of life be affected by something with no causal influence on me?

tom_cr-10

Why does it seem false?

If welfare of strangers is something you value, then it is not a net cost.

Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).

Under that... (read more)

-1Nornagest
Either you're using a broader definition of the social contract than I'm familiar with, or you're giving it too much credit. The model I know provides (one mechanism for) the legitimacy of a government or legal system, and therefore of the legal rights it establishes including an expectation of enforcement; but you don't need it to have media of exchange, nor cooperation between individuals, nor specialization. At most it might make these more scalable. And of course there are models that deny the existence of a social contract entirely, but that's a little off topic.
0Lumifer
Having a particular value cannot have a cost. Values start to have costs only when they are realized or implemented. Costlessly increasing the welfare of strangers doesn't sound like altruism to me. Let's say we start telling people "Say yes and magically a hundred lives will be saved in Chad. Nothing is required of you but to say 'yes'." How many people will say "yes"? I bet almost everyone. And we will be suspicious of those who do not -- they would look like sociopaths to us. That doesn't mean that we should call everyone but sociopaths an altruist -- you can, of course, define altruism that way but at this point the concept becomes diluted into meaninglessness. We continue to have major disagreements about the social contract, but that's a big discussion that should probably go off into a separate thread if you want to pursue it.
tom_cr00

The question is not one of your goals being 50% fulfilled

If I'm talking about a goal actually being 50% fulfilled, then it is.

"Risk avoidance" and "value" are not synonyms.

Really?

I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?

If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from bei... (read more)

tom_cr00

Apologies if my point wasn't clear.

If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we are using similar enough definitions of altruism to understand each other.

We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.

Furthermore, being nice in a way that exposes me to undue risk i... (read more)

-1Lumifer
Why does it seem false? It is about values, in particular the relationship between the value "welfare of strangers" and the value "resources I have". It does not. The social contract requires you not to infringe upon the rights of other people and that's a different thing. Maybe you can treat it as requiring being polite to people. I don't see it as requiring being nice to people. I think we have a pretty major disagreement about that :-/
tom_cr10

Point 1:

my goals may be fulfilled to some degree

If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn't, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.

The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

But risk is i... (read more)

-1Said Achmiz
Dawes' argument, as promised. The context is: Dawes is explaining von Neumann and Morgenstern's axioms.

Aside: I don't know how familiar you are with the VNM utility theorem, but just in case, here's a brief primer.

The VNM utility theorem presents a set of axioms, and then says that if an agent's preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as "the expected value of x".)

That is to say, the agent's preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility). In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent's preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)

(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)

N.B.: "Alternatives" in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternat
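In symbols, a compact restatement of the primer above (this is not Dawes' own notation beyond U and E; the gamble formula follows directly from the definition of ApB and of expected value):

$$ X \succ Y \iff E[U(X)] > E[U(Y)], \qquad E[U(ApB)] = p\,U(A) + (1-p)\,U(B). $$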
0Said Achmiz
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs. a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.) "Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other. Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)
tom_cr00

I did mean after controlling for an ability to have impact

Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?

Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.

5nshepperd
If you actually did some kind of expected value calculation, with your utility function set to something like U(thing) = u(thing) / causal-distance(thing), you would end up double-counting "ability to have an impact", because there is already a 1/causal-distance sort of factor in E(U|action) = sum { U(thing') P(thing' | action) } built into how much each action affects the probabilities of the different outcomes (which is basically what "ability to have an impact" is). That's assuming that what JonahSinick meant by "ability to have an impact" was the impact of the agent upon the thing being valued. But it sounds like you might have been talking about the effect of thing upon the agent? As if all you can value about something is any observable effect that thing can have on yourself (which is not an uncontroversial opinion)?
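A minimal sketch of the double-counting worry, with invented numbers (the names, the utilities, and the 0.5/distance fall-off are purely illustrative assumptions, not anything from the thread):

```python
# Illustrative sketch of the double-counting point, with invented numbers.
# Two things the agent might help: one causally "near", one "far".
things = {
    "near": {"u": 1.0, "distance": 1.0},
    "far":  {"u": 1.0, "distance": 10.0},
}

def p_affected(thing):
    # The probability that the action actually changes the outcome already
    # shrinks with causal distance -- this is the 1/distance-like factor that
    # lives inside P(thing' | action).  The 0.5 scale is arbitrary.
    return 0.5 / thing["distance"]

# Expected impact with utility left alone: sum of u * P(affected)
expected_plain = {name: t["u"] * p_affected(t) for name, t in things.items()}

# Double-counted version: utility itself is also divided by causal distance
expected_double = {name: (t["u"] / t["distance"]) * p_affected(t)
                   for name, t in things.items()}

print(expected_plain)   # {'near': 0.5, 'far': 0.05}  -- far thing discounted once
print(expected_double)  # {'near': 0.5, 'far': 0.005} -- far thing discounted twice
```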
0JonahS
Note that I wasn't arguing that it's rational. See the quotation in this comment. Rather, I was describing an input into effective altruist thinking.
tom_cr00

I would call it a bias because it is irrational.

It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).

Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.

0Said Achmiz
Problems with your position:

1. "goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous. Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant. Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a lower (Edit: of course I meant "higher", whoops) expectation.

[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up.

Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
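To make point 2 concrete, a minimal sketch (with invented numbers) of two distributions over utility that share an expectation but differ in form, which is the thing Dawes argues it is not irrational to have preferences about:

```python
# Two gambles as probability distributions over *utility* (invented numbers):
# identical expectation, very different form.
gamble_A = {0: 0.5, 10: 0.5}      # utility 0 or 10, 50/50
gamble_B = {-40: 0.1, 10: 0.9}    # small chance of a much worse outcome

def expectation(dist):
    return sum(u * p for u, p in dist.items())

def variance(dist):
    m = expectation(dist)
    return sum(p * (u - m) ** 2 for u, p in dist.items())

print(expectation(gamble_A), expectation(gamble_B))  # 5.0 5.0    -- same expectation
print(variance(gamble_A), variance(gamble_B))        # 25.0 225.0 -- different forms
```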
tom_cr00

Rationality is about implementing your goals

That's what I meant.

An interesting claim :-) Want to unroll it?

Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending how you define it) are not orthogonal.

Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. furthers one's goals, i.e. it is rational. I kno... (read more)

3Lumifer
Let's define things the way they are generally understood or at least close to it. You didn't make your point. I understand altruism, generally speaking, as valuing the welfare of strangers so that you're willing to attempt to increase it at some cost to yourself. I understand social contract as a contract, a set of mutual obligations (in particular, it's not a belief).
tom_cr-10

Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.

empathy is not [one level removed from terminal values]

Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise "do exactly as you feel immediately inclined, at all times," would be all we needed to know about morality.

tom_cr-20

I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but has higher expectation. In which case, I would call it a bias.

0Said Achmiz
It's not a bias, it's a preference. Insofar as we reserve the term bias for irrational "preferences" or tendencies or behaviors, risk aversion does not qualify.
2Lumifer
Yes. Any goals. No. Rationality is about implementing your values, whatever they happen to be. An interesting claim :-) Want to unroll it?
3Said Achmiz
Basing altruism on contractarianism is very different from basing altruism on empathy. For one thing, the results may be different (one might reasonably conclude that we, here in the United States or wherever, have no implicit social contract with the residents of e.g. Nigeria). For another, it's one level removed from terminal values, whereas empathy is not, so it's a different sort of reasoning, and not easily comparable. (btw, I also think there's a basic misunderstanding happening here, but I'll let Lumifer address it, if he likes.)
tom_cr50

A couple of points:

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say

[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"

Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb... (read more)

0JonahS
I didn't quite have classical utilitarianism in mind. I had in mind principles like:

* Not helping somebody is equivalent to hurting the person
* An action that doesn't help or hurt someone doesn't have moral value.

I did mean after controlling for ability to have an impact.
0Said Achmiz
Thank you for bringing this up. I've found myself having to point out this distinction (between consequentialism and utilitarianism) a number of times; it seems a commonplace confusion around here.
tom_cr10

Thanks for taking the time to try to debunk some of the sillier aspects of classic utilitarianism. :)

‘Actual value’ exists only theoretically, even after the fact.

You've come close to an important point here, though I believe its expression needs to be refined. My conclusion is that value has real existence. This conclusion is primarily based on the personal experience of possessing real preferences, and my inference (to a high level of confidence) that other humans routinely do the same. We might reasonably doubt the a priori correspondence between ac... (read more)

tom_cr00

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good"....

If that is your basis for a scientific standard, then I'm afraid I must withdraw from this discussion.

Ditto, if this is your idea of humor.

what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

That's just silly. What if c = 299,792,458 m/s is a mathematica... (read more)

tom_cr00

I quite like Bob Trivers' self-deception theory, though I only have tangential acquaintance with it. We might anticipate that self deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism, hence it may be profitable to suppress that recognition, if Trivers is on to something.

Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historic genetic fitness, to be good at self analysis, and you're quite possibly on to something, that the computational overhead just doesn't pay off.

tom_cr00

I'm not conflating anything. Those are different statements, and I've never implied otherwise.

The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.

"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim abo... (read more)

1nshepperd
If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good" and would not say things like "taking a murder pill doesn't affect the fact that murder is bad". Alternative: what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?
tom_cr00

I guess Lukeprog also believes that Lukeprog exists, and that this element of his world view is also not contrarian. So what?

One thing I see repeatedly in others is a deep-rooted reluctance to view themselves as blobs of perfectly standard physical matter. One of the many ways this manifests itself is a failure to consider inferences about one's own mind as fundamentally similar to any other form of inference. There seems to be an assumption of some kind of non-inferable magic when many people think about their own motivations. I'm sure you appreciate how... (read more)

2nshepperd
That's not at all what I meant. Obviously minds and brains are just blobs of matter. You are conflating the claims "lukeprog thinks X is good" and "X is good". One is an empirical claim, one is a value judgement. More to the point, when someone says "P is a contrarian value judgement, not a contrarian world model", they obviously intend "world model" to encompass empirical claims and not value judgements.
0Strange7
My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.
tom_cr10

Are there "elements of" which don't contain value judgements?

That strikes me as a question for dictionary writers. If we agree that Newton's laws of motion constitute such an element, then clearly, there are such elements that do not contain value judgements.

Is Alice's preference for cabernet part of Alice's world model?

iff she perceives that preference.

If Alice's preferences are part of Alice's world model, then Alice's world model is part of Alice's world model as well.

I'm not sure this follows by logical necessity, but how is t... (read more)

tom_cr10

Alice is part of the world, right? So any belief about Alice is part of a world model. Any belief about Alice's preference for cabernet is part of a world model - specifically, the world model of who-ever holds that belief.

By any chance....?

Yes. (The phrase "the totality of" could, without any impact on our current discussion, be replaced with "elements of". )

Is there something wrong with that? I inferred that to also be the meaning of the original poster.

0Lumifer
Not "whoever", we are talking specifically about Alice. Is Alice's preference for cabernet part of Alice's world model? I have a feeling we're getting into the snake-eating-its-own-tail loops. If Alice's preferences are part of Alice's world model then Alice's world model is part of Alice's world model as well. Recurse until you're are reduced to praying to the Holy Trinity of Godel, Escher, and Bach :-) Could it? You are saying that value judgments must be a part of. Are there "elements of" which do not contain value judgements?
tom_cr00

A value judgement both uses and mentions values.

The judgement is an inference about values. The inference derives from the fact that some value exists. (The existing value exerts a causal influence on one's inferences.)

This is how it is with all forms of inference.

Throwing a ball is not an inference (note that 'inference' and 'judgement' are synonyms), thus throwing a ball is no way necessarily part of a world model, and for our purposes, in no way analogous to making a value judgement.

0nshepperd
Here is a quote from the article: Lukeprog thinks that effective altruism is good, and this is a value judgement. Obviously, most of mainstream society doesn't agree—people prefer to give money to warm fuzzy causes, like "adopt an endangered panda". So that value judgement is certainly contrarian. Presumably, lukeprog also believes that "lukeprog thinks effective altruism is good". This is a fact in his world model. However, most people would agree with him when asked if that is true. We can see that lukeprog likes effective altruism. There's no reason for anyone to claim "no, he doesn't think that" when he obviously does. So this element of his world model is not contrarian.
tom_cr00

I never said anything of the sort that Alice's values must necessarily be part of all world models that exist inside Alice's mind. (Note, though, that if we are talking about 'world model,' singular, as I was, then world model necessarily includes perception of some values.)

When I say that a value judgement is necessarily part of a world model, I mean that if I make a value judgement, then that judgement is necessarily part of my world model.

0Lumifer
So, Alice likes cabernet and dislikes merlot. Alice says "I value cabernet more than merlot". This is a value judgement. How is it a part of Alice's world model and which world model? By any chance, are you calling "a world model" the totality of a person's ideas, perceptions, representations, etc. of external reality?
tom_cr-20

What levels am I confusing? Are you sure it's not you that is confused?

Your comment bears some resemblance to that of Lumifer. See my reply above.

0nshepperd
To put it simply, what I am saying is that a value judgement is about whatever it is you are in fact judging. While a factual assertion such as you would find in a "model of the world" is about the physical configuration of your brain. This is similar to the use/mention distinction in linguistics. When you make a value judgement you use your values. A model of your brain mentions them. An argument like this could be equally well applied to claim that the act of throwing a ball is necessarily part of a world model, because your arm is physical. In fact, they are completely different things (for one thing, simply applying a model will never result in the ball moving), even though a world model may well describe the throwing of a ball.
tom_cr00

whose world model?

Trivially, it is the world model of the person making the value judgement I'm talking about. I'm trying hard, but I'm afraid I really don't understand the point of your comment.

If I make a judgement of value, I'm making an inference about an arrangement of matter (mostly in my brain), which (inference) is therefore part of my world model. This can't be otherwise.

Furthermore, any entity capable of modeling some aspect of reality must be, by definition, capable of isolating salient phenomena, which amounts to making value judgements. Th... (read more)

2Lumifer
I don't think we understand each other. Let me try to unroll. A model (of the kind we are talking about) is some representation of reality. It exists in a mind. Let's take Alice. Alice holds an apple in her hand. Alice believes that if she lets go of the apple it will fall to the ground. This is an example of a simple world model that exists inside Alice's mind: basically, that there is such a thing as gravity and that it pulls objects towards the ground. You said "isn't a value judgement necessarily part of a world model?" I don't see a value judgement in this particular world model inside Alice's mind. You also said "You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you." That is a claim about how Alice's values came to be. But I don't see why Alice's values must necessarily be part of all world models that exist inside Alice's mind.
tom_cr00

A minor point in relation to this topic, but an important point, generally:

It seems to be more of a contrarian value judgment than a contrarian world model

Correct me if I'm wrong, but isn't a value judgement necessarily part of a world model? You are a physical object, and your values necessarily derive from the arrangement of the matter that composes you.

Many tell me (effectively) that what I've just expressed is a contrarian view. Certainly, for many years I would have happily agreed with the non-overlapping-ness of value judgements and world views.... (read more)

-2Lumifer
The issue is, whose world model? Your world model does not necessarily include values even if they were to be deterministically derived from "the arrangement of the matter". The map is not the territory. Models are imperfect and many different models can be built on the basis of the same reality.
0nshepperd
That's confusing levels. A world model that makes some factual assertions, some of which imply "my values are X" is a distinct thing from your values actually being X. To begin with, it's entirely possible for your world model to imply that "my values are X" when your values are actually Y, in which case your world model is wrong.
tom_cr00

Thanks, I'll take a look at the article.

If you don't mind, when you say "definitely not clear," do you mean that you are not certain about this point, or that you are confident, but it's complicated to explain?

0Manfred
I mean that I'm not very sure where that correspondence comes up in Jaynes, but Jaynes is being less explicit than other derivations, which I am more confident about.
tom_cr00

I'm not sure that's what Jaynes meant by correspondence with common sense. To me, it's more reminiscent of his consistency requirements, but I don't think it is identical to any of them.

Certainly, it is desirable that logically equivalent statements receive the same probability assignment, but I'm not aware that the derivation of Cox's theorems collapses without this assumption.

Jaynes says, "the robot always represents equivalent states of knowledge by equivalent plausibility assignments." The problem, of course, is knowing that 2 statements ar... (read more)

0Manfred
It's definitely not clear, I'll admit. And you're right, it is also a sort of consistency requirement. Fortunately, I can direct you to section 5 of a more explicit derivation here.
tom_cr00

Thanks for taking the time to elaborate.

I don't recall that desideratum in Jaynes' derivations. I think it is not needed. Why should it be needed? Certainty about axioms is a million miles from certainty about all their consequences, as seems to be the exact point of your series.

Help me out, what am I not understanding?

0Manfred
In Jaynes, this is sort of hidden in desideratum 2, "correspondence with common sense." The key part is that if two statements are logically equivalent, Cox's theorem is required to assign them the same probability. Since the axioms of arithmetic and "the axioms of arithmetic, and also 298+587=885," are logically equivalent, they should be assigned the same probability. I'm not sure how to help you well beyond that, my pedagogy is weak here.
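In symbols, the consistency requirement being appealed to here is just that logically equivalent statements receive equal plausibility: if A and B are logically equivalent given the background information X, then

$$ P(A \mid X) = P(B \mid X). $$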
tom_cr00

Maybe I'm just thick, but I'm not at all convinced by your claim that probabilistic reasoning about potential mathematical theorems violates any desiderata.

I re-read the post you linked to in the first line, but am still not satisfied. Could you be a bit more specific? Which desideratum? And how violated?

Perhaps it will help you explain, if I describe how I see things.

Since mathematical symbols are nothing more than (for example) marks on a bit of paper, there is no sense in which strings of such symbols have any independent truth (beyond the fact that th... (read more)

0Manfred
Cox's theorem literally has as a desideratum that the results should be identical to classical logic when you're completely certain about the axioms. This is what's violated. I'll try illustrating this with an example.

Suppose we have a calculator that wants to add 298+587. If it can only take small steps, it just has to start with 298, and then do 298+1=299 (keeping track that this is step 1), 299+1=300 (step 2), 300+1=301 (that's 3), etc, until it reaches 884+1=885 (step 587), at which point the calculator puts "885" on the screen as its last step. And so our calculator, when you input 298+587, outputs 885. In order to add two big numbers, our calculator only ever had to add 1 to things - the stored number and the step count - which is pretty cool.

The output follows inexorably from the rules of this calculator. When the first input was 298 and the step count is 587, the stored number is 885, and when the step count is equal to the second input, the calculator puts the stored number on the screen. We cannot get a different answer without using a calculator that follows different rules.

Now suppose we build a calculator that does the same thing but with different labels. We ask it for P(298+587=885 | standard axioms). So it starts with something it knows - P(298=298)=1. Then it moves to P(298+1=299)=1. Then P(298+2=300)=1. Eventually it reaches P(298+587=885)=1, so it outputs "1." This is just a different label on our normal calculator. Every step it increments two numbers, one on the left and one on the right, and then it stops when the incremented number on the left reaches 587. It's the same process. This is the translation that obeys Cox's theorem. It's the same process as classical logic, because when you're certain about axioms probabilistic logic and classical logic are equivalent.

Logical uncertainty is like saying that P(298+587=885 | standard axioms) = 0.5. This violates Cox's theorem because it doesn't agree with our "re-labeled calculator
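A minimal sketch of the increment-only calculator and its probabilistic re-labeling described above (the function names are mine, and the point is only that both follow the same step-by-step process):

```python
# Increment-only calculator: adds two numbers by repeatedly adding 1.
def add_by_increments(a, b):
    total, step = a, 0
    while step < b:        # keep adding 1 until b steps have been taken
        total += 1
        step += 1
    return total           # add_by_increments(298, 587) -> 885

# Re-labeled version: each step asserts P(a + step = total) = 1,
# so certainty about the starting point propagates to P(298+587=885) = 1.
def prob_of_sum(a, b):
    total, step, prob = a, 0, 1.0   # P(a + 0 = a) = 1
    while step < b:
        total += 1
        step += 1
        # each inference step preserves certainty, so prob stays 1.0
    return total, prob              # prob_of_sum(298, 587) -> (885, 1.0)

print(add_by_increments(298, 587))  # 885
print(prob_of_sum(298, 587))        # (885, 1.0)
```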
tom_cr30

Jonah was looking at probability distributions over estimates of an unknown probability

What is an unknown probability? Forming a probability distribution means rationally assigning degrees of belief to a set of hypotheses. The very act of rational assignment entails that you know what it is.

tom_cr00

Thanks, I was half getting the point, but is this really important, as you say? If my goal is to gain value by assessing whether or not your proposition is true, why would this matter?

If the goal is to learn something about the person you are arguing with (maybe not as uncommon as I'm inclined to think?), then certainly, care must be taken. I suppose the procedure should be to form a hypothesis of the type "Y was stated in an inefficient attempt to express Z," where Z constitutes possible evidence for X, and to examine the plausibility of that hypothesis.

tom_cr00

Not sure if I properly understood the original post - apologies if I'm just restating points already made, but I see it like this.

Whatever it consists of, it's pretty much the definition of rationality that it increases expected utility. Assuming that the intermediate objective of a rationalist technique like steelmanning is to bring us closer to the truth, then there are 2 trivial cases where steelmanning is not rational:

(1) When the truth has low utility. (If a lion starts chasing me, I will temporarily abandon my attempt to find periodicity in the digit... (read more)

2blacktrance
I think the point is that while steelmanning can get you closer to the truth about the conclusion of an argument, it can unintentionally get you further from the truth about what argument a person is making. If I say "X is true because of Y" and you steelman it into "X is true because of Z", it's important to remember that I believe "X is true because of Y" and not "X is true because of Z".
tom_cr10

A few terminological headaches in this post. Sorry for the negative tone.

There is talk of a "fixed but unknown probability," which should always set alarm bells ringing.

More generally, I propose that whenever one assigns a probability to some parameter, that parameter is guaranteed not to be a probability.

I am also disturbed by the mention of Knightian uncertainty, described as "uncertainty that can't be usefully modeled in terms of probability." Now there's a charitable interpretation of that phrase, and I can see that there may be a ps... (read more)

tom_cr00

Nice discussion of game theory in politics. Is there any theoretical basis for expecting the line-item veto generally to be more harmful than beneficial to the president?

(Not an attempt to belittle the above fascinating example, but genuine interest in any related, more general results of the theory.)

tom_cr30

Perhaps some explanation is in order. (I thought it was quite a witty thought experiment, but apparently it's not appreciated.)

If it is in principle impossible to explain why one ought to do something, then what is the function of the word "ought"? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary.

Alternatively, if it is not in principle impossible, then trivially the condition 'ought' (the condition of oughting?) rests entirely upon real facts about the universe, and the position of Randaly is false.

I... (read more)

0Randaly
People are more complicated than you're modeling them as. People have numerous conflicting urges/desires/values/modules. Classicists would say that 'ought' refers to going with the virtuous action; Freudians the superego; Hansonians your far-mode. All of these groups would separately endorse the interactionist (psychology) viewpoint that 'ought' also refers to social pressures to take pro-social actions. (On a side note: it is completely possible to explain why one ought to do something; it merely requires that a specific morality be taken as a given. In practice, all humans' morality tends to be similar, especially in the same culture; and since our morality is not exactly like a utility function, in so far as it has conflicting, non-instrospectively available and changing parts, moral debate is still possible.) Well, yes, one would need the additional claim that one ought to believe the truth. Among humans, for specific cases, this usually goes without saying. No. Would you also argue that the universe is not part of the universe, because some people think it's pretty and others don't?
3tom_cr
Perhaps some explanation is in order. (I thought it was quite a witty thought experiment, but apparently it's not appreciated.) If it is in principle impossible to explain why one ought to do something, then what is the function of the word "ought"? Straightforwardly, it can have none, and we gain nothing by its existence in our vocabulary. Alternatively, if it is not in principle impossible, then trivially the condition 'ought' (the condition of oughting?) rests entirely upon real facts about the universe, and the position of Randaly is false. I know there is some philosophical pedigree behind this old notion, but my investigations yield that it is not possible, under valid reasoning (without butchering the word 'ought'), to assert that ought statements cannot be entirely reduced to is statements, and simultaneously to assert that one ought to believe this, which seems to present a dilemma. I'm glad that Randaly explicitly chose this way of reasoning, as it is intimately linked with my interest in commenting on this post. Everyone accepts that questions relating to the life cycles of stars are questions of fact about the universe (questions of epistemic rationality), but the philosophical pedigree rejects the idea that questions about what is an appropriate way for a person to behave are similar (instrumental rationality) - it seems that people are somehow not part of the universe, according to this wisdom.
tom_cr20

Thanks for bringing that article to my attention.

You explain how you learned skills of instrumental rationality from debating, but in doing so, you also learned reliable answers to questions of fact about the universe: how to win debates. When I'm learning electrostatics I learn that charges come with different polarities. If I later learn about gravity, and that gravitationally everything attracts, this doesn't make the electrostatics wrong! Similarly your debating skills were not wrong, just not the same skills you needed for writing research papers.

Reg... (read more)

3katydee
In a vacuum, this is certainly true and in fact I agree with all of your points. But I believe that human cognitive biases make this sort of compartmentalization between mental skillsets more difficult than one might otherwise expect. As the old saying goes, "To a man with a hammer, everything looks like a nail." It would be fair to say that I believe tradeoffs between epistemic and instrumental rationality exist only thanks to quirks in human reasoning-- however, I also believe that we need to take those quirks into account.
tom_cr00

The terminology is a bit new to me, but it seems to me epistemic and instrumental rationality are necessarily identical.

If epistemic rationality is implementation of any of a set of reliable procedures for making true statements about reality, and instrumental rationality is use of any of a set of reliable procedures for achieving goals, then the latter is contained in the former, since reliably achieving goals entails possession of some kind of high-fidelity model of reality.

Furthermore, what kind of rationality does not pursue goals? If I have no interest in chess, and ability to play chess will have no impact on any of my present or future goals, then it would seem to be irrational of me to learn to play chess.

1Randaly
Loosely speaking, epistemic and instrumental rationality are prescriptions for the two sides of the is/ought gap. While 'ought statements' generally need to make reference to 'is statements', they cannot be entirely reduced to them. One possible goal is to have false beliefs about reality; another is to have no impact on reality. (For humans in particular, there are unquestionably some facts that are both true and harmful (i.e. instrumentally irrational) to learn.) Epistemic rationality. (I assume that you mean 'isn't about pursuing goals.' Otherwise, epistemic rationality might pursue the goal of matching the map to the territory.)
1katydee
Here's a brief post I wrote about tradeoffs between epistemic and instrumental rationality.
tom_cr70

Let's define an action as instrumentally rational if it brings you closer to your goal.

Suppose my goal is to get rich. Suppose, on a whim, I walk into a casino and put a large amount of money on number 12 in a single game of roulette. Suppose number 12 comes up. Was that rational?
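For concreteness, assuming an American double-zero wheel (38 pockets, straight-up bets paying 35 to 1), the expected profit on a stake B is

$$ E[\text{profit}] = \tfrac{1}{38}(35B) + \tfrac{37}{38}(-B) = -\tfrac{2}{38}B \approx -0.053\,B, $$

so the bet loses in expectation whether or not 12 happens to come up.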

Same objection applies to your definition of epistemically rational actions.

tom_cr20

Thanks for the welcome.

I'm in Houston.

tom_cr20

As a thought experiment, this is interesting, and I’m sure informative, but there is one crucial thing that this post neglects to examine: whether the inscription under the hood actually reads “humanity maximizer.” The impression from the post is that this is already established.

But has anybody established, or even stopped to consider whether avoiding the loss of 10^46 potential lives per century is really what we value? If so, I see no evidence of it here. I see no reason to even suspect that enabling that many lives in the distant future has any remotely... (read more)

tom_cr00

I haven't had much explicit interaction with these inside/outside view concepts, and maybe I'm misunderstanding the terminology, but a couple of the examples of outside views given struck me as more like inside views: Yelp reviews and the advice of a friend are calibrated instruments being used to measure the performance of a restaurant, ie to build a model of its internal workings.

But then almost immediately, I thought, "hey, even the inside view is an outside view." Every model is an analogy, e.g. an analogy in the sense of this thing A is a bi... (read more)

0Kurros
Keynes in his "Treatise on probability" talks a lot about analogies in the sense you use it here, particularly in "part 3: induction and analogy". You might find it interesting.
tom_cr100

Hi folks

I am Tom. Allow me to introduce myself, my perception of rationality, and my goals as a rationalist. I hope what follows is not too long and boring.

I am a physicist, currently a post-doc in Texas, working on x-ray imaging. I have been interested in science for longer than I have known that 'science' is a word. I went for physics because, well, everything is physics, but I sometimes marvel that I didn't go for biology, because I have always felt that evolution by natural selection is more beautiful than any theory of 'physics' (of course, really it ... (read more)

0Vaniver
Welcome! Where are you in Texas?