Comment author: V_V 20 September 2012 01:45:23PM *  6 points [-]

since we won't be considering uncertainty,

this model is not useful for making decisions in the real world.

Seriously, why this idiosyncratic position on the diversification of charity donations? How is it different from diversification of investments?

It is common knowledge that diversification is a strategy used by risk-averse agents to counter the negative effects of uncertainty. If there is no uncertainty, it's obviously true that you should invest everything in the one thing that gives the highest utility (as long as the amount of money you invest is small enough that you don't run into saturation effects, that is, as long as you can make the local linearity approximation).
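To make the point concrete, here is a toy sketch (all payoffs and the utility function are made up for illustration): a risk-neutral agent facing a risky option with higher expected value goes all-in on it, while a risk-averse agent with a concave utility function diversifies.

```python
import math

# Toy model: option X pays 8 or 0 with equal probability (EV = 4),
# option Y pays 3 for certain. frac_x is the share of a unit budget in X.
def expected_utility(frac_x, risk_averse):
    outcomes = [(0.5, 8 * frac_x + 3 * (1 - frac_x)),  # X pays off
                (0.5, 3 * (1 - frac_x))]               # X pays nothing
    # Concave utility log(1 + w) models risk aversion; linear = risk-neutral.
    u = (lambda w: math.log(1 + w)) if risk_averse else (lambda w: w)
    return sum(p * u(w) for p, w in outcomes)

# Search over allocations in steps of 0.1.
best_neutral = max(range(11), key=lambda i: expected_utility(i / 10, False)) / 10
best_averse = max(range(11), key=lambda i: expected_utility(i / 10, True)) / 10
# The risk-neutral optimum is a corner (everything in X); the risk-averse
# optimum is an interior, diversified split.
```

With these numbers the risk-neutral agent puts 100% in X, while the risk-averse agent's optimum is a partial allocation, which is the standard motivation for diversification.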

Why would charities behave any differently from profit-making assets? Do you think that charities involve less uncertainty? That's far from obvious. In fact, typical charities might well involve more uncertainty, since they seem to be more difficult to evaluate.

Comment author: FAWS 20 September 2012 02:54:26PM *  5 points [-]

The logic requires that your donations are purely altruistically motivated and you only care for good outcomes.

E.g. take donating to one of two cancer research organizations, A or B. If your donations are purely altruistic and the consequences are the same, you should have no preference about which of the organizations finds a new treatment. You have no reason to distinguish the case of you personally donating $1000 to both organizations and someone else doing the same from the case of you donating $2000 to A and someone else donating $2000 to B. And once the donations are made you should have no preference between A and B finding the new treatment.

So the equivalent to your personal portfolio when making investments isn't your personal donations, but the aggregate donations of everyone. And since you aren't the only one making donations, the donations are already diversified, so you are free to pick something underrepresented with high yield (which will almost certainly still be underrepresented afterwards). If you manage 0.1% of a $10,000,000 portfolio with 90% in government bonds, it makes no sense to invest any of that 0.1% in government bonds in the name of diversification.
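A small sketch of that aggregate-portfolio point (the cause names, funding levels, and diminishing-returns curve are all invented for illustration): because a small donor's budget barely moves the aggregate, the cause with the highest marginal value before their donation is still the best after it, so the whole budget should go there.

```python
# Hypothetical aggregate "portfolio" of everyone's donations.
aggregate = {"bonds": 9_000_000, "cause_A": 900_000, "cause_B": 100_000}

def marginal_value(cause, funded):
    # Assumed diminishing returns: marginal value falls as funding grows.
    base = {"bonds": 1.0, "cause_A": 5.0, "cause_B": 8.0}[cause]
    return base / (1 + funded / 1_000_000)

my_budget = 10_000  # 0.1% of the aggregate: too small to change the ranking

# Best cause at the margin, before my donation.
best = max(aggregate, key=lambda c: marginal_value(c, aggregate[c]))
# Best cause at the margin even after my entire budget goes to `best`:
# it is still on top, so splitting would only fund worse margins.
still_best = max(
    aggregate,
    key=lambda c: marginal_value(c, aggregate[c] + (my_budget if c == best else 0)),
)
```

Here the underfunded cause stays the best marginal option even after the donor's whole budget lands on it, which is the sense in which a small donor gains nothing from diversifying.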

Comment author: ArisKatsaris 14 September 2012 05:37:29PM *  3 points [-]

"From the inside, the program experiences no mechanisms of reduction of these atomic qualia"

Materialism predicts that algorithms have an "inside"?

As a further note, I'll have to say that if all the blue and if the red in my visual experience were switched around, my hunch tells me that I'd be experiencing something different; not just in the sense of different memory associations but that the visual experience itself would be different. It would not just be that "red" is associated with hot, and that "blue" is associated with cold... The qualia of the visual experience itself would be different.

Comment author: FAWS 17 September 2012 04:21:40PM 1 point [-]

That thought experiment doesn't make much sense. If the experiences were somehow switched, but everything else kept the same (i.e. all your memories and associations of red are still connected to each other and everything else in the same way), you wouldn't notice the difference; everything would still match your memories exactly. If there even is such a thing as raw qualia, there is no reason to suppose they are stable from one moment to the next; as long as the correct network of associations is triggered, there is no evolutionary advantage either way.

Comment author: Xachariah 08 July 2012 06:31:18AM 2 points [-]

I still classify it as blackmail.

Something similar to this happened to Cameron Diaz, although the rights to resell the photos were questionable. She posed topless in some bondage shots for a magazine, but they were never printed. The photographer kept the shots and the recording of the photo shoot for ten years until one of the Charlie's Angels films was about to come out. He offered them to her for a couple of million or he would sell them to the highest bidder. The courts didn't buy that he was just offering her first right of refusal and sentenced him for attempted grand theft (blackmail), forgery, and perjury (for modifying release forms and lying about it). Link

Comment author: FAWS 08 July 2012 09:38:10AM *  1 point [-]

Are you sure you aren't just pattern matching to similarity to known types of blackmail? Do you think it would be useful for an AI to classify it the same way (which was the starting point of this thread)?

Your link doesn't go into much detail, but it seems like he was convicted because he was lying and making up the negative consequences he threatened her with, and because he went out of his way to make the consequences of selling to someone else as bad as possible rather than maximizing revenue (or at least made her believe so). That would qualify this case as blackmail under the definition above, unlike either of our hypothetical examples.

Comment author: Xachariah 08 July 2012 12:45:32AM 6 points [-]

Counterexample:

A man seduces a female movie star into a one night stand and secretly records a sex tape. He would prefer to blackmail the movie star for lots of money, but if that fails he would rather release the tape to the press for a smaller amount of money + prestige than do nothing. The movie star's preference ordering is: nothing happens, then she pays out, and lastly the press finds out. The optimal choice is for her to pay out, because if she pre-commits to not give in to blackmail, she will receive the worst possible outcome.

This seems to fall squarely under blackmail, yet requires no pre-commitment, iteration, or bluffing.

Comment author: FAWS 08 July 2012 02:51:14AM *  0 points [-]

That's not blackmail at all. It seems like blackmail because of the questionable morality of selling secretly recorded sex tapes, but giving the movie star the chance to buy the tape first doesn't make the whole thing any less moral than it would be without that chance, and unlike real blackmail the movie star being known not to respond to blackmail doesn't help in any way.

Consider this variation: Instead of a secret tape, the movie star voluntarily participated in an amateur porno that was intended to be publicly released from the beginning, but held up for some reason, and all that happened before the movie star became famous in the first place. The producer knows that releasing the tape will hurt her career and offers to let her buy the tape to prevent it from being released. This doesn't seem like blackmail at all, and the only change was to the moral (and legal) status of releasing the tape, not to the trade.

Comment author: drethelin 06 July 2012 08:17:14PM 0 points [-]

Life insurance is cheaper if you get it when you're young and healthy, so it might make sense to have it before you actually get dependents, if you plan on having them.

Comment author: FAWS 06 July 2012 08:27:34PM *  1 point [-]

Cheaper by enough to make up for the extra years you pay premiums in? E.g. will getting life insurance at 25 have cost less in total than getting life insurance at 40 by the time you are 60? If so, why would insurance companies set the rates that way? Are people who get life insurance early so much more responsible that they are significantly less likely to die even at higher ages?
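The comparison being asked about is simple arithmetic once you plug in rates; the premiums below are made-up placeholders, since the real answer depends entirely on actual actuarial pricing.

```python
# Back-of-envelope comparison with hypothetical premiums: does a lower
# rate locked in at 25 beat starting at 40, once you count the extra
# 15 years of payments? (Real rate tables would be needed to settle it.)
premium_at_25 = 300   # assumed annual premium, locked in at age 25
premium_at_40 = 700   # assumed annual premium, locked in at age 40

total_from_25 = premium_at_25 * (60 - 25)   # 35 years of payments
total_from_40 = premium_at_40 * (60 - 40)   # 20 years of payments
# With these numbers the early start costs 10,500 vs. 14,000, so it wins;
# a smaller rate gap would flip the result.
```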

Comment author: Andreas_Giger 04 July 2012 05:33:36PM *  0 points [-]

Let us assume a repeated game where an agent is presented with a decision between A and B, and Omega observes that the agent chooses A in 80% and B in 20% of the cases.

If Omega now predicts the agent to choose A in the next instance of the game, then the probability of the prediction being correct is 80% - from Omega's perspective as long as the roll hasn't been made, and from the agent's perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 100% (A) or 0% (B).

If, instead, Omega is a ten-sided die with 8 A-sides and 2 B-sides, then the probability of the prediction being correct is 68% - from Omega's perspective, and from the agent's perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 80% (A) or 20% (B).
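The two cases above can be checked directly; this small computation just reproduces the numbers in the comment (agent chooses A with probability 0.8, B with probability 0.2).

```python
# Agent's choice frequencies, as observed by Omega.
p_a, p_b = 0.8, 0.2

# Case 1: Omega deterministically predicts A.
p_correct_always_A = p_a * 1.0 + p_b * 0.0   # correct only when the agent picks A

# Case 2: Omega is a ten-sided die with 8 A-sides and 2 B-sides,
# predicting independently of the agent's choice.
p_correct_die = p_a * 0.8 + p_b * 0.2        # 0.64 + 0.04

# After the agent has decided, the die-Omega is right with probability
# 0.8 (agent chose A) or 0.2 (agent chose B), matching the text.
```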

If the agent knows that Omega makes the prediction before the agent makes the decision, then the agent cannot make different decisions without affecting the probability of the prediction being correct, unless Omega's prediction is a coin toss (p=0.5).

The only case where the probability of Omega being correct is unchangeable with p≠0.5 is the case where the agent cannot make different decisions, which I call "no free will".

Comment author: FAWS 04 July 2012 06:49:17PM *  2 points [-]

You are using the wrong sense of "can" in "cannot make different decisions". The everyday subjective experience of "free will" isn't caused by your decisions being indeterminate in an objective sense; that's the incoherent concept of libertarian free will. Instead it seems to be based on our decisions being dependent on some sort of internal preference calculation, and the correct sense of "can make different decisions" to use is something like "if the preference calculation had a different outcome, that would result in a different decision".

Otherwise results that are entirely random would feel more free than results that are based on your values, habits, likes, memories and other character traits, i.e. the things that make you you. Not at all coincidentally, this is also the criterion for whether it makes sense to bother thinking about the decision.

You yourself don't know the result of the preference calculation before you run it, otherwise it wouldn't feel like a free decision. But whether Omega knows the result in advance has no impact on that at all.

Comment author: Will_Newsome 30 May 2012 09:11:12PM *  1 point [-]

This post has thus far gotten an upvote and two [eta:3] downvotes. Downvoters: what do you dislike about this post? Please let me know so I can accommodate your discussion-section-content preferences in the future. Thanks for any feedback!

Comment author: FAWS 31 May 2012 10:55:21AM 4 points [-]

You mostly talk about your new blog instead of the idea the post claims to be about, and the post largely sounds like an advertisement. Two paragraphs summarizing your idea and one sentence talking about the blog (preferably worded as a disclaimer instead of an advertisement) would have been better.

Comment author: amit 16 April 2012 12:32:54AM 2 points [-]

You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)

Comment author: FAWS 25 April 2012 12:18:13AM *  1 point [-]

(Not Will, but I think I mostly agree with him on this point)

There is no such thing as a uniquely specified "next experience". There are going to be instances of you that remember being you and consider themselves the same person as you, but there is no meaningful sense in which exactly one of them is right. Granted, all instances of you that remember a particular moment will be in the future of that moment, but it seems silly to only care about the experiences of that subset of instances of you and completely neglect the experiences of instances that only share your memories up to an earlier point. If you weight the experiences more sensibly, then in the case of a rigorously executed quantum suicide the bulk of the weight will be in instances that diverged before the decision to commit quantum suicide. There will be no chain of memory leading from the QS to those instances, but why should that matter?

Comment author: dspeyer 18 April 2012 02:39:59PM 0 points [-]

But the Potterverse is dualist. Even if horcruxes get some massive retcon, animagi preserve that in MOR.

So maybe souls are immune to the normal patterns of time and causality, and a decision from the soul has special properties for prophecy. Only when all involved souls have chosen does the timestream become fixed enough for prophecies. I'm not sure what that means for time turners. Maybe people who have gone back are out of contact with their souls.

This would cost the story applicability, but it is a story, not a treatise.

Comment author: FAWS 18 April 2012 07:06:59PM *  2 points [-]

Mere dualism isn't enough to save libertarian free will. To the extent your decision is characteristic of you, it is at least in principle predictable, at least probabilistically. The non-predictable component of your decision process is by necessity not even in principle distinguishable from that of Gandhi or Hitler in any way. So how can you call the outcome of the non-predictable component a decision of your free will?

Comment author: David_Gerard 18 April 2012 07:17:49AM *  7 points [-]

In the 40,000 years since anatomically modern humans had migrated to Australia from Asia

BTW - this was the accepted figure as of 1991, but molecular evidence suggests 62,000-75,000 years. Which makes Harry's point even more strongly: it took a long time for humans as we know them to invent what we think of as basic stuff.

Comment author: FAWS 18 April 2012 07:28:29AM *  7 points [-]

At a cursory glance, the date you cite seems to be for the time the population they are descended from split from African populations, not for when they arrived in Australia. Genetic evidence cannot show where your ancestors lived, only how they were related to other populations (which might imply things about where they lived, provided you already know that for the other populations).
