One possible explanation for the plasticity of human goals is that the goals that change aren't really final goals.
So me-now faces the question,
Should I assign any value to final goals that I don't have now, but that me-future will have because of goal drift?
If goals are interpreted widely enough, the answer should be, No. By hypothesis, those goals of me-future make no contribution to the goals of me-now, so they have no value to me. Accordingly, I should try pretty hard to prevent goal drift and / or reduce investment in the well-being of me-futur...
I am not that confident in the convergence properties of self-preservation as an instrumental goal.
It seems that at least some goals should be pursued ballistically -- i.e., by setting an appropriate course in motion so that it doesn't need active guidance.
For example, living organisms vary widely in their commitments to self-preservation. One measure of this variation is the range of lifespans and lifecycles. Organisms generally share the goal of reproducing, and they pursue this goal by a range of means, some of which require active guidance (like teachi...
Hard to see why you can't make a version of this same argument, at an additional remove, in the time travel case. For example, if you are a "determinist" and / or "n-dimensionalist" about the "meta-time" concept in Eliezer's story, the future people who are lopped off the timeline still exist in the meta-timeless eternity of the "meta-timeline," just as in your comment the dead still exist in the eternity of the past.
In the (seemingly degenerate) hypothetical where you go back in time and change the future, I'm not ...
Any inference about "what sort of thingies can be real" seems to me premature. If we are talking about causality and space-time locality, it seems to me that the more parsimonious inference regards what sort of thingies a conscious experience can be embedded in, or what sort of thingies a conscious experience can be of.
The suggested inference seems to privilege minds too much, as if to say that only the states of affairs that allow a particular class of computation can possibly be real. (This view may reduce to empiricism, which people like, but...
(Wikipedia's article on tax incidence claims that employees pay almost all of payroll taxes, but cites a single paper that claims a 70% labor / 30% owner split for corporate income tax burden in the US, and I have no idea how or whether that translates to payroll tax burden or whether the paper's conclusions are generally accepted.)
There's no consensus on the incidence of the corporate income tax in the fully general case. It's split among too many parties.
The USA is not the best place to earn money. My own experience suggests that at least Japan, New Zealand, and Australia can all be better. This may be shocking, but young professionals with advanced degrees can earn more discretionary income as a receptionist or a bartender in the Australian outback than as, say, a software engineer in the USA.
As a side question, when did a receptionist or bartender become a "professional"? Is "professional" just used as a class marker, standing for something like "person with a non-vocational ...
I read it as "young people employed as professionals can make more money by being not-professionals in the Australian outback".
But to many, "professional" merely means "someone who is paid to do something". I think that usage came into the popular consciousness via "professional athlete", though I'm not sure if that's the first instance of the popular usage.
ETA: according to OED, the relevant distinction in this usage is "professional" vs. "amateur", and it was used somewhat in that sense as far back as maybe 1806 (I assert that their earlier citations were meant ironically, or merely by comparison to actual professions).
Note that a lot of the financial benefit described here comes from living somewhere remote -- in particular the housing and food costs. That's the reason for the strenuous warning not to live in "Sidney, Melbourne or any major Australian city." From a larger perspective, it partly accounts for choosing Australia over America (low population density --> low housing costs, etc.).
For a full analysis, the cost differentials of living in the Australian outback vs. an American city (or whatever) have to be decomposed into price level, consumption, ...
There used to be a special "expatriation tax" that applied only to taxpayers who renounced their (tax) citizenship for tax avoidance purposes. However, under current law, I believe you are treated the same regardless of your reason for renouncing your (tax) citizenship. Here's an IRS page on the subject:
http://www.irs.gov/businesses/small/international/article/0,,id=97245,00.html
This is not an area of my expertise, though.
In the wild, people use these gambits mostly for social, rather than argumentative, reasons. If you are arguing with someone and believe their arguments are pathological, and engagement is not working, you need to be able to stop the debate. Hence, one of the above gambits -- this is clearest with "Let's agree to disagree."
In practice, it can be almost impossible to get out of a degrading argument without being somewhat intellectually dishonest. And people generally are willing to be a little dishonest if it will get them out of an annoying and unprod...
The difficulty for me is that this technique is at war with having an accurate self-concept, and may conflict with good epistemic hygiene generally. For the program to work, one must seemingly learn to suppress one's critical faculties for selected cases of wishful thinking. This runs against trying to be just the right amount critical when faced with propositions in general. How can someone who is just the right amount critical affirm things that are probably not true?
Your argument is equivalent to, "But what if your utility function rates keeping promises higher than a million orgasms, what then?"
The hypo is meant to be a very simple model, because simple models are useful. It includes two goods: getting home, and having $100. Any other speculative values that a real person might or might not have are distractions.
I very much recommend Reasons and Persons, by the way. A friend stole my copy and I miss it all the time.
What is it, pray tell, that Omega cannot do?
Can he not scan your brain and determine what strategy you are following? That would be odd, because this is no stronger than the original Newcomb problem and does not seem to contain any logical impossibilities.
Can he not compute the strategy, S, with the property "that at each moment, acting as S tells you to act -- given (1) your beliefs about the universe at that point and (2) your intention of following S at all times -- maximizes your net utility [over all time]?" That would be very odd, since y...
It's a test case for rationality as pure self-interest (really it's like an altruistic version of the game of Chicken).
Suppose I'm purely selfish and stranded on a road at night. A motorist pulls over and offers to take me home for $100, which is a good deal for me. I only have money at home. I will be able to get home IFF I can credibly promise to pay $100 when I get home.
But when I get home, the marginal benefit to paying $100 is zero (under assumption of pure selfishness). Therefore if I behave rationally at the margin when I get home, I cannot keep my pr...
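For what it's worth, here's a minimal sketch of that hypothetical in Python. The $1,000 value placed on getting home is an illustrative assumption (only the $100 fare is from the setup); the point it shows is just that the agent who is "rational at the margin" once home never gets the ride in the first place.

```python
# Minimal sketch of the stranded-motorist hypothetical.
# Assumption for illustration: getting home is worth $1,000 to you; the fare is $100.
VALUE_OF_GETTING_HOME = 1000
FARE = 100

def outcome(pays_once_home: bool) -> int:
    """The motorist only drives you home if your promise to pay is credible,
    i.e. if you are in fact the kind of person who pays once home."""
    ride_offered = pays_once_home          # the motorist predicts your disposition
    if not ride_offered:
        return 0                           # stranded: no ride, no fare paid
    return VALUE_OF_GETTING_HOME - (FARE if pays_once_home else 0)

print(outcome(pays_once_home=True))    # 900: keeps the promise, gets home
print(outcome(pays_once_home=False))   # 0: "rational at the margin", but stays stranded
```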
No. The point is that you actually want to survive more than you want to win, so if you are rational about Chicken you will sometimes lose (consult your model for details). Given your preferences, there will always be some distance \epsilon before the cliff where it is rational for you to give up.
Therefore, under these assumptions, the strategy "win or die trying" seemingly requires you to be irrational. However, if you can credibly commit to this strategy -- be the kind of person who will win or die trying -- you will beat a rational player every time.
This is a case where it is rational to have an irrational disposition, a disposition other than doing what is rational at every margin.
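A toy numerical sketch of that point, with payoffs that are my own illustrative assumption (winning beats backing down, and both beat dying); the coin flip for two rational players is likewise just a stand-in for whoever's epsilon happens to be larger.

```python
# Toy payoffs for Chicken: WIN > BACK_DOWN > DIE.
import random

WIN, BACK_DOWN, DIE = 1, 0, -100

def chicken(a_commits: bool, b_commits: bool):
    """Return (payoff_a, payoff_b). A committed player never gives up; a rational
    player gives up some epsilon before the cliff rather than accept the DIE payoff."""
    if a_commits and b_commits:
        return DIE, DIE                        # neither swerves
    if a_commits:
        return WIN, BACK_DOWN                  # the rational player backs down first
    if b_commits:
        return BACK_DOWN, WIN
    # two rational players: whoever's epsilon is larger backs down; model it as a coin flip
    return (WIN, BACK_DOWN) if random.random() < 0.5 else (BACK_DOWN, WIN)

print(chicken(a_commits=True, b_commits=False))   # (1, 0): credible commitment beats rationality
print(chicken(a_commits=True, b_commits=True))    # (-100, -100): the cost of carrying the disposition
```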
This is a classic point and clearer than the related argument I'm making above. In addition to being part of the accumulated game theory learning, it's one of the types of arguments that shows up frequently in Derek Parfit's discussion of what-is-rationality, in Ch. 1 of Reasons and Persons.
I feel like there are difficulties here that EY is not attempting to tackle.
Quoting myself:
(though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it).
I'll go further and say this distinction doesn't matter unless you assume that Newcomb's problem is a time paradox or some other kind of backwards causation.
This is all tangential, though, I think.
Yes, all well and good (though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent" or why you care about it). My point is that a concept of rationality-as-winning can't have a definite extension across, say, the domain of agents, because of the existence of Russell's-Paradox problems like the one I identified.
This is perfectly robust to the point that weird and seemingly arbitrary properties are rewarded by the game known as the universe. Your proposed red...
What you give is far harder than a Newcomb-like problem. In Newcomb-like problems, Omega rewards your decisions, he isn't looking at how you reach them.
You misunderstand. In my variant, Omega is also not looking at how you reach your decision. Rather, he is looking at you beforehand -- "scanning your brain", if you will -- and evaluating the kind of person you are (i.e., how you "would" behave). This, along with the choice you make, determines your later reward.
In the classical problem (unless you just assume backwards causation), ...
I don't think I buy this for Newcomb-like problems. Consider Omega who says, "There will be $1M in Box B IFF you are irrational."
Rationality as winning is probably subject to a whole family of Russell's-Paradox-type problems like that. I suppose I'm not sure there's a better notion of rationality.
"Passing out condoms increases the amount of sex but makes each sex act less dangerous. So theoretically it's indeterminant whether it increases or decreases the spread of AIDS."
Not quite -- on a rational choice model, passing out condoms may decrease or not impact the spread of AIDS (in principle), but it can't increase it. A rational actor who doesn't actively want AIDS might increase their sexual activity enough to compensate for the added safety of the condom, but they would not go further than that.
(This is different from the seatbelt case because car crashes result in costs, say to pedestrians who are struck, that are not internalized by the driver.)
Formalizations that take big chunks of arguments as black boxes are not that useful. Formalizations that instead map all of an argument's moving parts are very hard.
The reason that specialists learn formalizations only for domain-specific arguments is that formalizing truly general arguments[FN1] is an extremely difficult problem -- such a formalism is difficult to design and difficult to use. This is why mathematicians work largely in natural language, even though their arguments could (usually or always) be described in formal logic. Specialized formal languages are pos...
Totally agree -- helps if you can convince them to read A Fire Upon the Deep, too. I'm not being facetious; the explicit and implicit background vocabulary seems to make it easier to understand the essays.
(EDIT: to clarify, it is not that I think Fire in particular must be elevated as a classic of rationality, but that it's part of a smart sci-fi tradition that helps lay the ground for learning important things. There's an Eliezer webpage about this somewhere.)
Clarity and transparency. One should be able to open the book to a page, read an argument, and see that it is right.
(Obviously this trades off against other values -- and is in some measure a deception -- but it's the kind of thing that impresses my friends.)
I read on r/MagicArena that, at least based on public information from Wizards, we don't *know* that "You draw two hands, and it selects the hand with the amount of lands closest to the average for your deck."
What we know is closer to: "You draw two hands, and there is some (unknown, but possibly not absolute) bias towards selecting the hand with the amount of lands closest to the average for your deck."
I take it that, if the bias is less than absolute, the consequences for deck-building are in the same direction but less extreme.
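Here's a minimal Monte Carlo sketch of that reading, under assumed parameters (a 60-card deck with 24 lands, and a `bias` parameter standing in for the unknown strength of the selection -- none of these numbers come from Wizards). It shows how a less-than-absolute bias shifts the opening-hand land distribution in the same direction as full smoothing, but less far.

```python
# Monte Carlo sketch of the "draw two hands, biased selection" rule described above.
# Deck composition and the bias value are illustrative assumptions, not published values.
import random
from collections import Counter

DECK_SIZE = 60
LANDS = 24          # assumed deck: 24 lands, 36 spells
HAND_SIZE = 7
EXPECTED = HAND_SIZE * LANDS / DECK_SIZE   # average lands per opening hand

def draw_hand(rng):
    """Return the number of lands in a random 7-card hand."""
    deck = [1] * LANDS + [0] * (DECK_SIZE - LANDS)
    return sum(rng.sample(deck, HAND_SIZE))

def opening_hand(rng, bias):
    """Draw two hands; with probability `bias`, keep the one whose land count is
    closer to the deck average, otherwise keep the first hand drawn."""
    a, b = draw_hand(rng), draw_hand(rng)
    if rng.random() < bias:
        return min((a, b), key=lambda lands: abs(lands - EXPECTED))
    return a

def distribution(bias, trials=100_000, seed=0):
    rng = random.Random(seed)
    counts = Counter(opening_hand(rng, bias) for _ in range(trials))
    return {k: round(counts[k] / trials, 3) for k in sorted(counts)}

if __name__ == "__main__":
    print("no smoothing   :", distribution(bias=0.0))
    print("partial (0.5)  :", distribution(bias=0.5))
    print("absolute (1.0) :", distribution(bias=1.0))
```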