All of CG_Morton's Comments + Replies

I simplify here because a lot of people think I will have contradictory expectations for a more complex event.

But I think you're being even more picky here. Do I -expect- that increasing the amount of gold in the world will slightly affect the market value? Yes. But I haven't wished anything related to that; my wish is -only- about some gold appearing in front of me.

Having the genie magically change how much utility I get from the gold is an even more ridiculous extension. If I wish for gold, why the heck would the genie feel it was his job to change m...

0kilobug
If you wish for gold, it's because you have expectations about what you'll do with that gold. Maybe fuzzy ones, but if you didn't have any, you wouldn't wish for gold. So you can't dissociate the gold from its use when what you're speaking about is "expectations". Otherwise, solutions like "the world is changed so that the precious metal is lead, and gold has low value, but all the rest is the same" would work. And that wouldn't meet your "expectations" about the wish at all.

The genie is, after all, all-powerful, so there are any number of subtle changes it could make that you didn't specify against that would immediately make you, or someone else, wish for the world to be destroyed. If that's the genie's goal, you have no chance. Heck, if it can choose its form it could probably appear as some psycho-linguistic anomaly that hits your retina just right to make you into a person who would wish to end the world.

Really I'm just giving the genie a chance to show me that it's a nice guy. If it's super evil I'm doomed regardless, but this wish test (hopefully) distinguishes between a benevolent genie and one that's going to just be a dick.

0kilobug
If you consider three classes of genies:
* (A) a genie that's going to "just be a dick" but is not skilled at it;
* (B) a genie that is benevolent;
* (C) a genie that's going to "just be a dick" but is very skilled at it.
Your test will (or at least may) tell A apart from (B or C). It won't tell B apart from C. The "there is no safe wish" rule applies to C. Sure, if your genie is not skilled at being "evil" (having a utility function very different from yours), you can craft a wish that is beyond the genie's ability to twist. But if the genie is skilled, much more intelligent than you are, with the ability to spend the equivalent of a million years of thinking in one second on how to twist the wish, he'll find a flaw and use it.

A wish is a pretty constrained thing, for some wishes.

If I wish for a pile of gold, my expectations probably constrain lots of externalities like 'Nobody is hurt acquiring the gold, it isn't taken from somewhere else, it is simply generated and deposited at my feet, but not, like, crushing me, or using the molecules of my body as raw material, or really anything that kills me for that matter'. Mostly my expectations are about things that won't happen, not things that will happen that might conflict (that consists only of: the gold will appear before me an...

1kilobug
You're already lowering your claim; it's no longer "for any value of X". But even so... "Nobody is hurt acquiring the gold": does that include people hurt because your sudden new gold decreases the market value of gold, so people owning stocks of gold or speculating on an increase of the gold price are hurt? Sure, you can say "it's insignificant", but how will a genie tell that apart? Your expectation of what a sudden supply of gold on the market would do and the reality of how it'll unfold probably don't match. So the genie will have to make corrections for that... which will themselves have side-effects... Also, you'll probably realize once you have some gold that gold doesn't bring you as much as you thought it would (at least, that happens to most lottery winners), so even if you genuinely get the gold, it'll fail to "meet all your expectations" of having gold. Unless the genie also fixes you so you get as much utility/happiness/... from the gold as you expected to get from it. And as soon as the genie has to start fixing you... game over.

I just take this as evidence that I -can't- beat the genie, and don't attempt any more wishes.

Whereas, if it's something simple then I have pretty strong evidence that the genie is -trying- to meet my wishes, that it's a benevolent genie.

Wish 1: "I wish for a paper containing the exact wording of a wish that, when spoken to you, would meet all my expectations for a wish granting X." For any value of X.

Wish 2: Profit.

Three wishes is overkill.

1DanielLC
Couldn't you just wish that all your expectations for a wish granting X were granted, and take out the second step?

The scroll modifies your expectations. The genie twist-interprets X, and then assesses your expectations of the result of the genie's interpretation of X. ("Why, that's just what you'd expect destroying the world to do! What are you complaining about?") The complete list of expectations regarding X is at least slightly self-contradictory, so of course the genie has no option except to modify your expectations directly...

Genie provides a 3,000-foot-long scroll, which if spoken perfectly will certainly do as you ask, but if spoken imperfectly in any of a million likely ways affords the genie room to screw you over.

Or the scroll is written in Martian.

5kilobug
I'm pretty sure your belief network is not coherent enough for it to be possible to "meet all your expectations"; somewhere there must be two expectations which you hold but which aren't, in fact, compatible. So the wish will fizzle ;)
1TheOtherDave
This presumes, of course, that my expectations for a wish granting X, for some value of X, are such that having a wish granted that meets them is profitable.

That's hardly a critique of the trolley problem. Special relativity itself stipulates that it doesn't apply to faster-than-light movement, but a moral theory can't say "certain unlikely or confusing situations don't count". The whole point of a moral theory is to answer those cases where intuition is insufficient, the extremes you talk about. Imagine where we'd be if people just accepted Newtonian physics, saying "It works in all practical cases, so ignore the extremes at very small sizes and very high speeds, they are faulty models". Of course we don't allow that in the sciences, so why should we in ethics?

0fubarobfusco
The analogy between moral theories and physics seems to suggest that just as we expect modern physics to act like Newtonian physics when dealing with big slow objects, we should expect some modern moral theory to act like folk morality when dealing with ordinary human life situations. Does that hold?
4Dmytry
In practical reasoning, "A" is shorthand for "I think A is true", et cetera: no absolute knowledge, a nonzero false-positive rate, and a sufficiently refined moral theory has to take this into account. Just as a thought experiment relying on, e.g., absolute simultaneity would render itself irrelevant to special or general relativity, so does the trolley problem's implicit assumption of absolute, reliable knowledge render it irrelevant to the extreme cases where the probability of the event is much smaller than the false-positive rate.

I can attest that I had those exact reactions on reading those sections of the article. And in general I am more impressed by someone who graduated quickly than by one who took longer than average, and by someone who has written a book than by one who hasn't. "But what if that's not the case?" is hardly a knock-down rebuttal.

I think it's more likely you're confusing the status you attribute to Kaj for the candidness and usefulness of the post with the status you would objectively add to or subtract from a person on hearing that they floundered or flourished in college.

3sark
What I had in mind was that his devotion to the cause, even as it ultimately harmed it, we think more than compensates for his lack of strategic foresight and late graduation. With that book, we think less of him for not contributing in a more direct way to the book, even as we abstractly understand what a vital job it was. Though of course that may just be me.

I don't see how this is admirable at all. This is coercion.

If I work for a charitable organization, and my primary goal is to gain status and present an image as a charitable person, then efforts by you to change my mind are adversarial. Human minds are notoriously malleable, so it's likely that by insisting I do some status-less charity work you are likely to convince me on a surface level. And so I might go and do what you want, contrary to my actual goals. Thus, you have directly harmed me for the sake of your goals. In my opinion this is unacceptable.

It's excessive to claim that the hard work, introspection, and personal -change- (the hardest part) required to align your actions with a given goal are equivalent in difficulty or utility to just taking a pill.

Even if self-help techniques consistently worked, you'd still have to compare the opportunity cost of investing that effort with the apparent gains from reaching a goal. And estimating the utility of a goal is really difficult, especially when it's a goal you've never experienced before.

You are quite right. My scores correlate much better now; I retract my confusion.

I underwent a real IQ test when I was young, and so I can say that this estimation significantly overshoots my actual score. But that's because it factors in test-taking as a skill (one that I'm good at). Then again, I'm also a little shocked that the table on that site puts an SAT score of 1420 at the 99.9th percentile. At my high school there were, to my knowledge, at least 10 people with a score that high (and that's only those I knew of), not to mention one perfect score. This is out of ~700 people. Does that mean my school was, on average, at the 90th percentile of intelligence? Or just at the 90th percentile of studying hard (much more likely, I think).

6cata
If you're in the median age band for Less Wrong, you misread the estimator. The "SAT to IQ" table is for the pre-1995 SAT, which had much more rarefied heights. The "SAT I to IQ" table is for the 1995-2005 SAT. (I did the same thing.)
2Desrtopa
And of course, there are also SAT prep services which offer guarantees of raising your score by such-and-such an amount (my mother thought I ought to try working for one, given my own SAT scores and the high pay, but I don't want to join the Dark Side and work in favor of more inequality of education by income), and these services are almost certainly not raising their recipients' IQs.
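A quick sanity check on the percentile question raised above: a minimal sketch, assuming (strongly) that ability is normally distributed within the school with the same standard deviation as the general population, and taking the table's claim that 1420 sits at the 99.9th percentile at face value.

```python
# Minimal sketch: if 10 of ~700 students clear a cutoff that only 0.1%
# of the general population clears, how far above average must the
# school be? Assumes a normal distribution with the population SD.
from scipy.stats import norm

cutoff_z = norm.ppf(0.999)             # cutoff in population SDs (~3.09)
school_rate = 10 / 700                 # observed fraction above it (~1.4%)
school_z = norm.ppf(1 - school_rate)   # cutoff in school SDs (~2.19)

school_mean_z = cutoff_z - school_z    # school mean in population SDs (~0.90)
print(f"implied school-mean percentile: {norm.cdf(school_mean_z):.1%}")  # ~81.6%
```

Under those assumptions the school mean lands near the 82nd percentile, so the 90th-percentile guess is in the right neighborhood (and, as cata's reply notes, the whole puzzle may dissolve if the wrong conversion table was read).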

The article spends two paragraphs explaining the link between openness and disease, and then even links to the Wikipedia page for parasite load. You link to 'Inferential Distance', but this seems more like a case of 'didn't really read the article' or perhaps 'came into it with really strong pre-conceptions of what it would be about, and didn't bother to update them based on what was actually there'.

2komponisto
...which is no more than a stub, and suffers from the same problem. In fact, I was probably even more irritated by the Wikipedia article than the post. It abruptly mentions "openness to experience" as if the reader were perfectly well expecting a discussion of human personality in an entry on biological parasites. I was able to comprehend the article, but it didn't feel satisfactory. The problem was that I was "offended" by the unprepared juxtaposition of concepts that I wasn't expecting to be juxtaposed. You could call this a "really strong pre-conception of what it would be about", in a negative sense: I didn't think it would be about that. This is exactly what inferential distance is: when the writer is "on a different planet" from the reader.

What kind of 'morality' are we talking about here? If we're talking about actual systems of morality, deontological/utilitarian/etc, then empathy is almost certainly not required to calculate morally correct actions. But this seems to be talking about intuitive morality. It's asking: is empathy, as a cognitive faculty, necessary in order to develop an internal moral system (that is like mine)?

I'm not sure why this is an important question. If people are acting morally, do we care if it's motivated by empathy? Or put it this way: Is it possible for a psychopath to act morally? I'd say yes, of course, no matter what you mean by morality.

I see what you're getting at with the intuitive concept (and philosophy matching how people actually are, rather than how they should be), but human imperfection seems to open the door to a whole lot of misunderstanding. Like, if someone said we were having fish for dinner, and then served duck, because they thought anything that swims is a fish, well I'd be put out to say the least.

I think my intuition is that my understanding of various concepts should approach the strictness of conceptual analysis. But maybe that's just vanity. After all, border cases can easily be specified (if we're having eel, just say 'eel' rather than 'fish').

6lukeprog
Sure. But that normative claim is different than the descriptive claim I made about concepts.

I think this is a little unfair. For example, I know exactly what the category 'fish' contains. It contains eels and it contains flounders, without question. If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.

We pattern-match on 'has fins', 'moves via tail', etc. because we can do that fast, and because animals with those traits are likely to share other traits like 'is bilaterally symmetrical' (and perhaps 'disease is more likely to be communicable from similarly shaped creatures'). But that doesn't mean the hard-and-fast 'fish' category is meaningless; there is a reason dolphins aren't fish.

6roystgnr
I'm guessing you'd quickly say "yes" for Panderichthys and "no" for Acanthostega... but what about Tiktaalik? Or if that's too easy to answer (which answer?), pick any clear amphibian and start looking at its ancestors. Is there a clear line where "this is not a fish, but its mother is"? We think of ring species as rare populations with interesting spatial distributions, but thanks to common descent every living thing is part of one big multi-ring species with a very interesting space-time distribution. It's hard to categorize living things, in part because the obvious ideas for equivalence relations turned out to not be inherently transitive.
3scav
It may not be a very good reason. To quote Wikipedia: In other words, there are probably fish that are more distantly related to each other than one of them is to a dolphin (or you).
6lukeprog
Are you talking about the biologist's stipulated definition of "fish"? This is different than one's intuitive concept.
-2jmmcd
Good point. The initial experiment couldn't even have been carried out without the biological definition of the fish category. If I'm asked to rate various fish as more or less typical, on a scale of 1 to 10, then I'll give very different answers depending on whether 1 means "least typical of all biologically-defined fish" or "mammal" or "flower" or "pair of headphones".

I actually tried the 2-4-6 puzzle on both my brothers, and they both got it right because they thought there was some trick to it and so kept pressing until they were sure (and even after ~20 questions still didn't fully endorse their answers). Maybe I have extra-non-biased brothers (not too likely), or maybe the clinical 2-4-6 test is so likely to be failed because students expect a puzzle and not a trick. That is to say, you are in a position of power over them and they trust you to give them something similar to what they've been given in the past. A...

4crazy88
I've had the same result. I tried it on friends and family and 3 of 5 got it right. However, in this group no-one got it right. I think the best solution is to run a number of puzzles in conjunction. Not everyone will fall for all of them but most people will fall for some of them. Maybe I should emphasise that point more.
4thomblake
Yes, that's the result I normally get anecdotally, except the time I described the puzzle so badly the subjects didn't know what to do.
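For context on the task these comments test informally: in Wason's original 2-4-6 problem the hidden rule is just "any ascending sequence", and subjects typically fail by proposing only triples that confirm a narrower guess such as "counting up by two". A minimal sketch of the setup (the helper name is mine):

```python
# Minimal sketch of Wason's 2-4-6 task: the true rule really is this
# permissive; the informative probes are the ones that could falsify
# a narrower hypothesis like "increases by two".
def fits_rule(triple):
    a, b, c = triple
    return a < b < c  # the experimenter's actual rule: strictly ascending

confirming_probes = [(2, 4, 6), (8, 10, 12), (1, 3, 5)]   # all fit "by two" as well
informative_probes = [(1, 2, 4), (3, 2, 1), (5, 5, 5)]    # can falsify the narrow guess

for probe in confirming_probes + informative_probes:
    print(probe, "->", "fits" if fits_rule(probe) else "does not fit")
```

(1, 2, 4) fits the real rule while violating the "by two" hypothesis, which is exactly the kind of test the brothers' suspicious persistence amounted to.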

I feel obliged to point out that Socialdemocracy is working quite well in Europe and elsewhere, and we owe it, among other things, free universal health care and paid vacations.

It's not fair to say we 'owe' Socialdemocracy for free universal health care and paid vacations, because they aren't so much effects of the system as they are fundamental tenets of the system. It's much like saying we owe FreeMarketCapitalism for free markets - without these things we wouldn't recognize it as socialism. Rather, the question is whether the marginal gain in things like quality of living are worth the marginal losses in things like autonomy. Universal health care is not an end in itself.

3Raw_Power
I dunno man, maybe it's a confusion on my part, but universal health coverage for one thing seems like a good enough goal in and of itself. Not specifically in the form of a State-sponsored organization, but the function of everyone having the right to health treatments, of no one being left to die just because they happen not to have a given amount of money at a given time. I think that, from a humanistic point of view, it's sort of obvious that we should have it if we can pay for it.

My point was meant in the sense that random culling for organs is not the best solution available to us. Organ growth is not that far in the future, and it's held back primarily because of moral concerns. This is not analogous to your parody, which more closely resembles something like: "any action that does not work towards achieving immortality is wrong".

The point is that people always try to find better solutions. If we lived in a world where, as a matter of fact, there is no way whatsoever to get organs for transplant victims except from living donors, then from a consequentialist standpoint some sort of random culling would in fact be the best solution. And I'm saying, that is not the world we live in.

But people still die.

I think a major part of how our instinctive morality works (and a reason humans, as a species, have been so successful) is that we don't go for cheap solutions. The most moral thing is to save everyone. The solution here is a stopgap that just diminishes the urgency of technology to grow organ replacements, and even if, in the short term, it consequentially leaves more people alive, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people's organs get damaged or wear out).

If a train is heading for 5...

5Eugine_Nier
Related: here is Eliezer's answer to the railroad switch dilemma, from "The Ends Don't Justify the Means (Among Humans)":
9Kingreaper
[parody mode] [/parody mode] Have you ever heard the saying "the perfect is the enemy of the good"? By insisting that only perfect solutions are worthwhile, you are arguing against any measure that doesn't make humans immortal.
3lessdazed
Why stop there? Why not say that the moral thing is to save even more people than are present, or will ever be born, etc.?

In the 1 red/10 beans scenario, you can only win once, no matter how hard you try. With 7 red/100 beans, you simply play the game 100 times, draw 7 red beans, and end up with 7x more money.

Unless the beans are replaced, in which case yeah, what the hell were they thinking?

I think the idea of the game was you get one chance to pick a bean. After all, if you can just keep picking beans until you've picked all the reds, there's not really much point to the so-called game anymore, is there?
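On the single-draw reading, the arithmetic is what made the original experiment (the ratio-bias study this thread is referencing) interesting: more red beans feels like more ways to win, but the probability is lower. A minimal sketch, assuming one draw per game and a fixed hypothetical prize:

```python
# Minimal sketch, assuming one draw per game and an equal prize per win:
# the 7-red bowl "feels" better but is strictly the worse bet.
p_small_bowl = 1 / 10    # 1 red among 10 beans  -> 10% per draw
p_large_bowl = 7 / 100   # 7 red among 100 beans ->  7% per draw

prize = 100  # hypothetical payout for drawing a red bean
print(f"small bowl: p = {p_small_bowl:.0%}, EV = {p_small_bowl * prize:.2f}")
print(f"large bowl: p = {p_large_bowl:.0%}, EV = {p_large_bowl * prize:.2f}")
```

Only under the repeated-draws-without-replacement reading in the comment above (exhaust all 100 beans, keeping every red) does the large bowl come out ahead, and at that point it is no longer a game of chance.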

I'd call that character humor, where the character of the boss is funny because of his exaggerated stupidity. It wouldn't be funny if the punchline was just the boss getting hit in the face by a pie (well, beyond the inherent humor of pie-to-face situations). Besides, most of the co-workers say idiotic things too!

[This comment is no longer endorsed by its author]

The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy m...

The difficulty of solving a Rubik's cube is exactly that it doesn't respond to heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up, at least). In general, -humans- solve a Rubik's cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute-force a solution much faster than it could manipulate the cube.

The mor...

0DanielLC
I don't know the methods you used, but the only ones I know of have certain "steps" where you can easily tell what step it's on. For example, by one method, anything that's five moves away will have all but two sides complete.
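A sketch of the contrast being drawn in this thread, with a toy puzzle standing in for the cube (the interface and names here are mine, not from any cube library): humans chain memorized macro-moves, while a program can run plain iterative-deepening search over raw move sequences, no insight required.

```python
# Minimal sketch of brute-force iterative deepening over move sequences.
# A real cube has ~18 moves per step, so raw search like this needs
# heavy pruning or pattern databases; the structure is the same, though.
def iddfs(state, is_solved, moves, max_depth):
    def dfs(s, depth, path):
        if is_solved(s):
            return path
        if depth == 0:
            return None
        for i, move in enumerate(moves):
            result = dfs(move(s), depth - 1, path + [i])
            if result is not None:
                return result
        return None

    for depth in range(max_depth + 1):
        solution = dfs(state, depth, [])
        if solution is not None:
            return solution
    return None

# Toy usage: reach 0 from 7 using the moves +1 and -3.
toy_moves = [lambda s: s + 1, lambda s: s - 3]
print(iddfs(7, lambda s: s == 0, toy_moves, max_depth=6))
# -> [0, 0, 1, 1, 1], i.e. +1, +1, -3, -3, -3
```

Note how the search needs no notion of "looks close to solved"; it simply guarantees the shortest solution up to the depth bound, which is exactly the property the human macro-move heuristics lack.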

The simple answer is that your choice is also probabilistic. Let's say that your disposition is one that would make it very likely you will choose to take only box A. Then this fact about yourself becomes evidence for the proposition that A contains a million dollars. Likewise if your disposition was to take both, it would provide evidence that A was empty.

Now let's say that you're pretty damn certain that this Omega guy is who he says he is, and that he was able to predict this disposition of yours; then, noting your decision to take only A stands as s...
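The truncated argument can be made concrete with an expected-value table: a minimal sketch, assuming the standard Newcomb payoffs (box A may hold $1,000,000, box B always holds $1,000) and a predictor who is right about your disposition with probability p.

```python
# Minimal sketch, assuming standard Newcomb payoffs: box B always holds
# $1,000; box A holds $1,000,000 iff the predictor foresaw one-boxing.
def expected_value(one_box, p):
    if one_box:
        # predictor right -> A is full; predictor wrong -> A is empty
        return p * 1_000_000
    # predictor right -> A is empty, you get only B's $1,000;
    # predictor wrong -> A is full and you take both boxes
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(f"p = {p}: one-box EV = {expected_value(True, p):>9,.0f}, "
          f"two-box EV = {expected_value(False, p):>9,.0f}")
```

On this evidential reading, one-boxing pulls ahead as soon as p exceeds roughly 0.5005, which is the point the comment is building toward: if you're "pretty damn certain" Omega predicts dispositions accurately, the decision falls out immediately.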