The genie is, after all, all-powerful, so there are any number of subtle changes it could make that you didn't specify against that would immediately make you, or someone else, wish for the world to be destroyed. If that's the genie's goal, you have no chance. Heck, if it can choose its form it could probably appear as some psycho-linguistic anomaly that hits your retina just right to make you into a person who would wish to end the world.
Really I'm just giving the genie a chance to show me that it's a nice guy. If it's super evil I'm doomed regardless, but this wish test (hopefully) distinguishes between a benevolent genie and one that's going to just be a dick.
A wish is a pretty constrained thing, for some wishes.
If I wish for a pile of gold, my expectations probably constrain lots of externalities like 'Nobody is hurt acquiring the gold, it isn't taken from somewhere else, it is simply generated and deposited at my feet, but not, like, crushing me, or using the molecules of my body as raw material, or really anything that kills me for that matter'. Mostly my expectations are about things that won't happen, not things that will happen that might conflict (that consists only of: the gold will appear before me an...
The scroll modifies your expectations. The genie twist-interprets X, and then assesses your expectations of the result of the genie's interpretation of X. ("Why, that's just what you'd expect destroying the world to do! What are you complaining about?") The complete list of expectations regarding X is at least slightly self-contradictory, so of course the genie has no option except to modify your expectations directly...
That's hardly a critique of the trolley problem. Special relativity itself stipulates that it doesn't apply to faster-than-light movement, but a moral theory can't say "certain unlikely or confusing situations don't count". The whole point of a moral theory is to answer those cases where intuition is insufficient, the extremes you talk about. Imagine where we'd be if people just accepted Newtonian physics, saying "It works in all practical cases, so ignore the extremes at very small sizes and very high speeds; they are faulty models". Of course we don't allow that in the sciences, so why should we in ethics?
I can attest that I had those exact reactions on reading those sections of the article. And in general I am more impressed by someone who graduated quickly than one who took longer than average, and by someone who wrote a book rather than one who hasn't. "But what if that's not the case?" is hardly a knock-down rebuttal.
I think it's more likely you're confusing the status you attribute to Kaj for the candidness and usefulness of the post with the status you would objectively add to or subtract from a person if you heard that they had floundered or flourished in college.
I don't see how this is admirable at all. This is coercion.
If I work for a charitable organization, and my primary goal is to gain status and present an image as a charitable person, then efforts by you to change my mind are adversarial. Human minds are notoriously malleable, so by insisting I do some status-less charity work you are likely to convince me on a surface level. And so I might go and do what you want, contrary to my actual goals. Thus, you have directly harmed me for the sake of your goals. In my opinion this is unacceptable.
It's excessive to claim that the hard work, introspection, and personal -change- (the hardest part) required to align your actions with a given goal are equivalent in difficulty or utility to just taking a pill.
Even if self-help techniques consistently worked, you'd still have to compare the opportunity cost of investing that effort with the apparent gains from reaching a goal. And estimating the utility of a goal is really difficult, especially when it's a goal you've never experienced before.
I underwent a real IQ test when I was young, and so I can say that this estimation significantly overshoots my actual score. But that's because it factors in test-taking as a skill (one that I'm good at). Then again, I'm also a little shocked that the table on that site puts an SAT score of 1420 at the 99.9th percentile. At my high school there were, to my knowledge, at least 10 people with that high of a score (and that's only those I knew of), not to mention one perfect score. This is out of ~700 people. Does that mean my school was, on average, at the 90th percentile of intelligence? Or just at the 90th percentile of studying hard (much more likely I think).
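As a rough back-of-the-envelope check on that guess (just a sketch: it assumes normally distributed scores with the national spread, and takes that table's 1420 = 99.9th-percentile figure at face value):

```python
from scipy.stats import norm

# Nationally, the table puts 1420 at the 99.9th percentile,
# i.e. roughly z = 3.09 standard deviations above the national mean.
z_national = norm.ppf(0.999)

# At my school, at least 10 of ~700 students cleared that score (~1.4%).
z_within_school = norm.ppf(1 - 10 / 700)

# If the school has the national spread but a shifted mean, the implied shift
# is the difference of those z-scores (~0.9 sigma), which puts the school's
# mean at roughly the low-80s percentile nationally.
shift = z_national - z_within_school
print(f"implied shift: {shift:.2f} sigma")
print(f"implied school-mean percentile: {norm.cdf(shift):.0%}")
```

So the "90th percentile" figure is at least in the right ballpark, whichever of the two explanations is behind it.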
The article spends two paragraphs explaining the link between openness and disease, and then even links to the Wikipedia page for parasite load. You link to 'Inferential Distance', but this seems more like a case of 'didn't really read the article' or perhaps 'came into it with really strong preconceptions of what it would be about, and didn't bother to update them based on what was actually there'.
What kind of 'morality' are we talking about here? If we're talking about actual systems of morality, deontological/utilitarian/etc, then empathy is almost certainly not required to calculate morally correct actions. But this seems to be talking about intuitive morality. It's asking: is empathy, as a cognitive faculty, necessary in order to develop an internal moral system (that is like mine)?
I'm not sure why this is an important question. If people are acting morally, do we care if it's motivated by empathy? Or put it this way: Is it possible for a psychopath to act morally? I'd say yes, of course, no matter what you mean by morality.
I see what you're getting at with the intuitive concept (and philosophy matching how people actually are, rather than how they should be), but human imperfection seems to open the door to a whole lot of misunderstanding. Like, if someone said we were having fish for dinner, and then served duck, because they thought anything that swims is a fish, well I'd be put out to say the least.
I think my intuition is that my understanding of various concepts should approach the strictness of conceptual analysis. But maybe that's just vanity. After all, border cases can easily be specified (if we're having eel, just say 'eel' rather than 'fish').
I think this is a little unfair. For example, I know exactly what the category 'fish' contains. It contains eels and it contains flounders, without question. If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.
We pattern-match on 'has fins', 'moves via tail', etc. because we can do that fast, and because animals with those traits are likely to share other traits like 'is bilaterally symmetrical' (and perhaps 'disease is more likely to be communicable from similarly shaped creatures'). But that doesn't mean the hard-and-fast 'fish' category is meaningless; there is a reason dolphins aren't fish.
I actually tried the 2-4-6 puzzle on both my brothers, and they both got it right because they thought there was some trick to it and so kept pressing until they were sure (and even after ~20 questions still didn't fully endorse their answers). Maybe I have extra-non-biased brothers (not too likely), or maybe the clinical 2-4-6 test is so likely to be failed because students expect a puzzle and not a trick. That is to say, you are in a position of power over them and they trust you to give them something similar to what they've been given in the past. A...
I feel obliged to point out that Socialdemocracy is working quite well in Europe and elsewhere and we owe it, among other stuff, free universal health care and paid vacations.
It's not fair to say we 'owe' Socialdemocracy for free universal health care and paid vacations, because they aren't so much effects of the system as they are fundamental tenets of the system. It's much like saying we owe FreeMarketCapitalism for free markets - without these things we wouldn't recognize it as socialism. Rather, the question is whether the marginal gains in things like quality of living are worth the marginal losses in things like autonomy. Universal health care is not an end in itself.
My point was meant in the sense that random culling for organs is not the best solution available to us. Organ growth is not that far in the future, and it's held back primarily because of moral concerns. This is not analogous to your parody, which more closely resembles something like: "any action that does not work towards achieving immortality is wrong".
The point is that people always try to find better solutions. If we lived in a world where, as a matter of fact, there is no way whatsoever to get organs for transplant victims except from living donors, then from a consequentialist standpoint some sort of random culling would in fact be the best solution. And I'm saying, that is not the world we live in.
But people still die.
I think a major part of how our instinctive morality works (and a reason humans, as a species, have been so successful) is that we don't go for cheap solutions. The most moral thing is to save everyone. The solution here is a stopgap that just diminishes the urgency of developing technology to grow replacement organs, and even if, in the short term, it consequentially leaves more people alive, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people's organs get damaged or wear out).
If a train is heading for 5...
I'd call that character humor, where the character of the boss is funny because of his exaggerated stupidity. It wouldn't be funny if the punchline was just the boss getting hit in the face by a pie (well, beyond the inherent humor of pie-to-face situations). Besides, most of the co-workers say idiotic things too!
The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy m...
Exactly. The difficulty of solving a Rubik's cube is that it doesn't respond to heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up at least). In general, -humans- solve a Rubik's cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute force a solution much faster than it could manipulate the cube.
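To make the "memorized sub-solutions" point concrete, here's a toy sketch of the human approach (there's no actual cube model behind it; the macro names are my own labels, and the two move sequences are the standard 'Sune' and 'T-perm'):

```python
# Human-style solving: recognize a known pattern, replay the memorized macro
# for it, and string the macros together into one long move list.
MACROS = {
    "orient_last_layer_corners": ["R", "U", "R'", "U", "R", "U2", "R'"],   # Sune
    "swap_corners_and_edges": ["R", "U", "R'", "U'", "R'", "F", "R2",
                               "U'", "R'", "U'", "R", "U", "R'", "F'"],    # T-perm
}

def human_style_solve(recognized_patterns):
    """Concatenate the memorized macro for each pattern spotted on the cube."""
    moves = []
    for pattern in recognized_patterns:
        moves.extend(MACROS[pattern])
    return moves

print(human_style_solve(["orient_last_layer_corners", "swap_corners_and_edges"]))
```

The point of each macro is that it rearranges a few specific pieces while leaving the rest alone, which is why the human method is cheap on memory but far from move-optimal.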
The mor...
The simple answer is that your choice is also probabilistic. Let's say that your disposition is one that would make it very likely you will choose to take only box A. Then this fact about yourself becomes evidence for the proposition that A contains a million dollars. Likewise if your disposition was to take both, it would provide evidence that A was empty.
Now let's say that you're pretty damn certain that this Omega guy is who he says he is, and that he was able to predict this disposition of yours; then, noting your decision to take only A stands as s...
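As a rough illustration of how that evidence cashes out (just a sketch using the standard Newcomb payoffs; 'accuracy' here means how often Omega's prediction matches the choice actually made):

```python
# Expected payoff of each choice, treating my own decision as evidence about
# what Omega predicted. Box A holds $1,000,000 if Omega predicted I'd take
# only A, and nothing otherwise; the other box always holds $1,000.
def expected_payoffs(accuracy):
    one_box = accuracy * 1_000_000
    two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000
    return one_box, two_box

for acc in (0.5, 0.51, 0.9, 0.99):
    print(acc, expected_payoffs(acc))
# Taking only A pulls ahead as soon as the predictor's accuracy exceeds ~50.05%.
```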
I simplify here because a lot of people think I will have contradictory expectations for a more complex event.
But I think you're being even more picky here. Do I -expect- that increasing the amount of gold in the world will slightly affect the market value? Yes. But I haven't wished anything related to that, my wish is -only- about some gold appearing in front of me.
Having the genie magically change how much utility I get from the gold is an even more ridiculous extension. If I wish for gold, why the heck would the genie feel it was his job to change m...