Well, it is sort of appealing to be able to carefully contemplate my actions without the influence of emotion, and to get a paperclip on top of that! But then, I don't want to become some horrible robot that doesn't truly care about paperclips.
That doesn't help maximize paperclips, though. If you make all decisions based on two criteria - paperclip count and emotions - then the only situation in which those decisions differ from what you would have decided based solely on paperclip count is one in which you choose an outcome with fewer paperclips but a better emotional result.
If you were to refuse my offer, you would not only be losing a paperclip now, but also increasing the likelihood that in the future, you will decide to sacrifice paperclips for emotion's sake. Perhaps you will one day build a paperclip-creator that creates one paperclip per second, and I will threaten to destroy a paperclip unless you shut it down. If you care too much about the threatened paperclip you might comply, and then where would you be? Sitting in an empty room where paperclips should have been.
I understand that, for the foreseeable future, reasonable humans and clippys will disagree about the relative merit of different amounts of paperclips. But that does not justify such trollish article titles, which seem designed to do nothing but inflame our base emotions.
Would you trade those base emotions for a paperclip?
Well, I would have done some research and gotten a warm fuzzy feeling out of expanding my knowledge, but if you're going to displace that motivation with only a chance at a measly $10 I guess it's not worth my time.
I think that the idea is good, and the engineering is fine for back-of-the-envelope, but can we please call it a "vault" or something instead of a grave? Cryonics already has an image problem, and we don't want to suggest the people in the grave are permanently dead.
Then we can suggest that they're temporarily dead, but they're still dead, so it's a "grave". Religions have been saying that death is temporary for thousands of years anyways, it wouldn't be anything new.
If you both pre-commit simultaneously, you both lose.
How about making a pre-commitment that only applies if the other person hasn't made one?
Because you need to know whether they've made a commitment, and acting on old information can get you burned if, as stated, you pre-commit simultaneously.
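Here's a toy sketch of that failure mode (the setup is illustrative, not anything specified above): both agents follow the rule "pre-commit only if the other hasn't," but since they act simultaneously, each is working from a stale observation, so both end up committed.

```python
# Toy model (illustrative): two agents each adopt the rule
# "pre-commit only if the other hasn't", but they decide simultaneously,
# so each acts on a stale observation taken before either has committed.

def decide(saw_other_committed: bool) -> bool:
    """Commit only if the other agent did not appear to be committed."""
    return not saw_other_committed

# Both agents observe the world *before* either commits...
a_sees_b_committed = False  # stale: B hasn't committed yet
b_sees_a_committed = False  # stale: A hasn't committed yet

# ...then both act at once, each believing the other is uncommitted.
a_commits = decide(a_sees_b_committed)  # True
b_commits = decide(b_sees_a_committed)  # True

print(a_commits and b_commits)  # True: both pre-commit, and both lose
```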
The statement that you trust someone absolutely, more often heard from future spouses than at any other time, is one of the most arrogant things you can say. You are not only saying you trust the other person, which is quite reasonable, but also that you could not possibly be mistaken. Given the rate at which people actually do make mistakes, especially when their emotions are running high, a pre-nup strikes me as quite reasonable insurance.
Then the statement of absolute trust is accounted for by the significant rate of mistakes people make.
Alternatively, you can make that statement as part of a strategy to maximize your expected return on a marriage - if the increase in marriage quality from placing absolute trust in your spouse is greater than the expected cost of being disadvantaged in the divorce negotiations (if your spouse turns out to be untrustworthy), then you might rationally do it anyways.
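As a back-of-the-envelope sketch, with purely made-up numbers just to make that comparison concrete:

```python
# All numbers here are hypothetical, chosen only to illustrate the tradeoff.
p_divorce = 0.2          # assumed chance the marriage ends in divorce
trust_quality_gain = 10  # assumed utility gained from a fully-trusting marriage
no_prenup_penalty = 30   # assumed utility lost in divorce negotiations without a pre-nup

ev_absolute_trust = trust_quality_gain - p_divorce * no_prenup_penalty  # 10 - 6 = 4
ev_prenup = 0.0  # baseline: no trust bonus, but no exposure either

# With these (made-up) numbers, declaring absolute trust wins.
print("declare absolute trust" if ev_absolute_trust > ev_prenup else "get the pre-nup")
```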
What is this 'guilt' you speak of? Are you a Catholic?
Guilt is an added cost to making decisions that benefit you at the expense of others. (Ideally, anyways.) It encourages people to cooperate to everyone's benefit. Suppose we have a PD matrix where each entry is (your move, their move) = (your payoff, their payoff):

(defect, cooperate) = (3, 0)
(defect, defect) = (1, 1)
(cooperate, cooperate) = (2, 2)
(cooperate, defect) = (0, 3)

Normally we say that 'defect' is the dominant strategy, since regardless of the other person's decision, your 'defect' payoff is 1 higher than your 'cooperate' payoff.
Now suppose you (both) feel guilty about betrayal to the tune of 2 units:

(defect, cooperate) = (1, 0)
(defect, defect) = (-1, -1)
(cooperate, cooperate) = (2, 2)
(cooperate, defect) = (0, 1)
The situation is reversed - 'cooperate' is the dominant strategy. Total payoff in this situation is 4. Total payoff in the guiltless case is 2 since both will defect. In the OP $10-button example the total payoff is $-90, so people as a group lose out if anyone pushes the button. Guilt discourages you from pushing the button and society is better for it.
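Here's a quick sketch that checks the dominance claim mechanically, using the payoff matrices above (the guilt penalty of 2 is subtracted from whichever player defects):

```python
# Payoffs are (your payoff, their payoff), keyed by (your move, their move),
# taken from the matrices above.
BASE = {
    ("defect", "cooperate"): (3, 0),
    ("defect", "defect"): (1, 1),
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"): (0, 3),
}

def with_guilt(payoffs, penalty=2):
    """Subtract the guilt penalty from each player who defects."""
    return {
        (me, them): (
            p_me - (penalty if me == "defect" else 0),
            p_them - (penalty if them == "defect" else 0),
        )
        for (me, them), (p_me, p_them) in payoffs.items()
    }

def dominant_strategy(payoffs):
    """Return the move that's strictly better for you whatever they do, or None."""
    for mine, other in (("defect", "cooperate"), ("cooperate", "defect")):
        if all(payoffs[(mine, theirs)][0] > payoffs[(other, theirs)][0]
               for theirs in ("defect", "cooperate")):
            return mine
    return None

print(dominant_strategy(BASE))              # defect
print(dominant_strategy(with_guilt(BASE)))  # cooperate
```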
I'm not convinced that 1/2 is the right answer. I actually started out thinking it was obviously 1/2, and then switched to 1/3 after thinking about it for a while (I had thought of Bostrom's variant (without the disclosure bit) before I got to that part).
Let's say we're doing the Extreme version, no disclosure. You're Sleeping Beauty, you just woke up, and that's all the new information you have. You know that there are 1,000,001 different ways this could have happened: one awakening if the coin landed heads, and a million awakenings if it landed tails. It seems clear that you should assign tails a probability of 1,000,000/1,000,001.
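A quick counting sketch of that (note that it scores per awakening, which is the counting I'm using here; whether that's the right scoring is of course the whole dispute):

```python
import random

# Extreme version: 1 awakening if heads, N awakenings if tails.
# This tallies what fraction of *awakenings* follow a tails flip.
N = 1_000_000
TRIALS = 10_000

tails_awakenings = 0
total_awakenings = 0
for _ in range(TRIALS):
    heads = random.random() < 0.5
    awakenings = 1 if heads else N
    total_awakenings += awakenings
    if not heads:
        tails_awakenings += awakenings

print(tails_awakenings / total_awakenings)  # ~ N / (N + 1), i.e. ~0.999999
```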
Now I'll go think about this some more and probably change my mind a few more times.
We can tweak the experiment a bit to clarify this. Suppose the coin is flipped before she goes to sleep, but the result is hidden. If she's interviewed immediately, she has no reason to answer other than 1/2 - at this point it's just "flip a fair coin and estimate P(heads)". What information does she get the next time she's asked that would cause her to update her estimate? She's woken up, yes, but she already knew that would happen before going under and still answered 1/2. With no new information she should still guess 1/2 when woken up.
I don't actually feel hostile. I'm not offended at all. I don't think you are being dishonest, hostile, lazy, or in any sense a jerk. What I feel is a desire for you to practice critical thinking somewhere with lower standards, and especially to read discussions where people actually change their minds (admittedly, those are difficult to find), before posting here. I'd like you to actually learn to think and write more skillfully and then come back.
My comment was lazy and unhelpful. It deserves downvotes (given that opinion, should I just delete it?). As noted, though, what CousinIt said is only the tip of the iceberg. I really don't want to try to explain all of what I think is wrong with it.
How is WrongBot going to learn to think and write more skillfully by moving to a place that's collectively worse at doing so?