Comment author: Kaj_Sotala 02 July 2013 04:39:30AM *  3 points [-]

You're right that it would indeed be a mistake to say "things are already great, let's stop here". But then, "things are really awful, so let's get better" doesn't sound quite right either. The attitude I would lean towards, and which I think is compatible with the quote, is "things are already pretty awesome, how could we make them even more awesome?".

Comment author: DSherron 02 July 2013 02:02:23PM 4 points [-]

The ideal attitude for humans with our peculiar mental architecture probably is one of "everything is amazing, also let's make it better," just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an "awesome" world state, just a "more awesome" relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to say "this state is awesome"; you have to say "more awesome" or "less awesome" compared to something else.

Also, such behavior is not compatible with the quote. The quote advocates ignoring real suboptimal sections of the world and instead basking in how much better the world is than it used to be. How are you supposed to make the drinks better if you're not even allowed to admit they're not perfect? I could, with minor caveats, get behind "things are great, let's make them better," but that's not what the quote said. The quote advocates pretending that we've already achieved perfection.

Comment author: dspeyer 02 July 2013 03:01:24AM 2 points [-]

I'm not saying we should settle for anything. Certainly not.

But to forget the awesomeness that already exists is a mistake with consequences. When looking at the big picture, it's important to realize that our current trajectory is upwards. When planning for something like space travel, it's important to remember that air travel sounded just as crazy a hundred years ago. And when thinking about thinking, it's worth remembering that this same effect will hit whatever awesome thing we think of next.

Comment author: DSherron 02 July 2013 01:52:00PM 1 point [-]

Sure, I agree with that. But you see, that's not what the quote said. It's actually not even related to what the quote said, except in very tenuous ways. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.

Comment author: dspeyer 01 July 2013 08:20:30PM 23 points [-]

Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.

But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks.

The drinks, people.

--Harry Dresden, Summer Knight, Jim Butcher

Comment author: DSherron 01 July 2013 11:58:16PM 15 points [-]

That honestly seems like some kind of fallacy, although I can't name it. I mean, sure, take joy in the merely real, that's a good outlook to have; but it's highly analogous to saying something like "Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life - eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about [starving kid in Africa|environmental pollution|dying peacefully of old age|generally any way in which the world is suboptimal]."

That kind of outlook not only doesn't support any kind of progress, or even just utility maximization, it actively paints the very idea of making things even better as presumptuous and evil. It does not suffice for something to be merely awe-inspiring; I want more. I want to not just watch a space shuttle launch (which is pretty cool on its own), but also have a drink that tastes better than any other in the world, with all of my best friends around me, while engaged in a thrilling intellectual conversation about strategy or tactics in the best game ever created. While a wizard turns us all into whales for a day. On a spaceship. A really cool spaceship. I don't just want good; I want the best. And I resent the implication that I'm just ungrateful for what I have. Hell, what would all those people who invested the blood, sweat, and tears to make modern flight possible say if they heard someone suggest that we should just stick to the status quo because "it's already pretty good, why try to make it better?" I can guarantee they wouldn't agree.

Comment author: TimS 01 July 2013 08:24:07PM 6 points [-]

Yes, like moving-the-goalposts, this is an annoying and dishonest rhetorical move.

Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green.

Suppose some Green says:

Yes, intellectual precursors to the current Green movement stated that the sky was literally Green. And they were less wrong, on the whole, than people who believed that the sky was blue. But the modern intellectual Green rejects that wave of Green-ish thought, and in part identifies the mistake as that wave of Greens being blue-ish in a way. In short, the Green movement of a previous generation made a mistake that the current wave of Greens rejects. Current Greens think we are less wrong than the previous wave of Greens.

Problematic, or reasonable non-mindkiller statement (attacking one's potential allies edition)?

How much of that intuition is driven by the belief that Bluism is correct? If we change the labels to Purple (some Blue) and Orange (no Blue), does the intuition change?

Comment author: DSherron 01 July 2013 08:51:00PM 3 points [-]

If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until you get to this point, there is no point in trying to engage in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.

For people having an otherwise rational debate, they need to at this point drop the Green and Blue labels (any rationalist should be happy to do so, since they're just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.

Comment author: ShardPhoenix 29 June 2013 11:40:09PM 1 point [-]

Poll for test takers:

Programming experience vs. whether you got the correct results (Here "experienced" means "professional or heavy user of programming" and "moderate" means "occasional user of programming"):

Did you think this was fair as a quick test?


Comment author: DSherron 01 July 2013 08:08:20PM 0 points [-]

Answered "moderate programmer, incorrect". I got the correct final answer but had 2 boxes incorrect. Haven't checked where I went wrong, although I was very surprised I had as back in grade school I got these things correct with near perfection. I learned programming very easily and have traditionally rapidly outpaced my peers, but I'm only just starting professionally and don't feel like an "experienced" programmer. As for the test, I suspect it will show some distinction but with very many false positives and negatives. There are too many uncovered aspects of what seems to make up a natural programmer. Also, it is tedious as hell, and I suspect that boredom will lead to recklessness will lead to false negatives, which aren't terrible but are still not good. May also lead to some selection effect.

Comment author: nshepperd 29 June 2013 03:11:03PM *  1 point [-]

Your calculations aren't quite right. You're treating EU(action) as though it were a probability value (like P(action)). EU(action) would be more logically written E(utility | action), which itself is an integral over utility * P(utility | action) for utility∈(-∞,∞), which, due to linearity of * and integrals, does have all the normal identities, like

E(utility | action) = E(utility | action, e) * P(e | action) + E(utility | action, ¬e) * P(¬e | action).

In this case, if you do expand that out, using p<<1 for the probability of an error, which is independent of your action, and assuming E(utility|action1,error) = E(utility|action2,error), you get E(utility | action) = E(utility | error) * p + E(utility | action, ¬error) * (1 - p). Or for the difference between two actions, EU1 - EU2 = (EU1' - EU2') * (1 - p) where EU1', EU2' are the expected utilities assuming no errors.
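A minimal numeric sketch (Python; the probabilities and utilities are made up for illustration) checking that the error term shrinks the comparison by (1 - p) but never flips it:

```python
# Minimal numeric sketch; all probabilities and utilities here are made up.
# Checks the identity above: EU1 - EU2 = (EU1' - EU2') * (1 - p), where
# EU' is the expected utility conditional on no error and p = P(error).

p = 1e-6               # P(error): probability of a fatal reasoning error
eu_given_error = 0.0   # E(utility | error), assumed equal for both actions

eu1_no_error = 10.0    # E(utility | action1, no error)
eu2_no_error = 4.0     # E(utility | action2, no error)

def eu(eu_no_error):
    # Law of total expectation over the error / no-error partition
    return eu_given_error * p + eu_no_error * (1 - p)

diff = eu(eu1_no_error) - eu(eu2_no_error)
assert abs(diff - (eu1_no_error - eu2_no_error) * (1 - p)) < 1e-12
print(diff)  # ~5.999994: scaled by (1 - p), but the sign is unchanged
```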

Anyway, this seems like a good model for "there's a superintelligent demon messing with my head" kind of error scenarios, but not so much for the everyday kind of math errors. For example, if I work out in my head that 51 is a prime number, I would accept an even odds bet on "51 is prime". But, if I knew I had made an error in the proof somewhere, it would be a better idea not to take the bet, since less than half of numbers near 50 are prime.
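That base rate is easy to verify with a quick sketch (Python), counting primes in a window around 50:

```python
# Count primes near 50 to check the "less than half are prime" claim.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

window = range(40, 61)
primes = [n for n in window if is_prime(n)]
print(primes)                     # [41, 43, 47, 53, 59]
print(len(primes) / len(window))  # ~0.24, well under half
```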

Comment author: DSherron 29 June 2013 05:29:38PM 0 points [-]

Right, I didn't quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn't hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).

Comment author: OccamsTaser 29 June 2013 12:16:38AM *  0 points [-]

Would you take the other side of my bet? Having limitless resources, or an FAI, or something, would you be willing to risk losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain: you'd be indifferent on the bet, and you get free signaling from it.

Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited; I may be misusing the term "money-pump"). Let's say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range (1/10^100, 1-1/10^100) (should be brackets, but formatting won't allow it). You can now be Pascal's mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned number (and such a scenario can be set up for any such number).
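A sketch of that mugging arithmetic (Python; the numbers are chosen only to match the 10^100 bound from the comment):

```python
# Sketch of the mugging arithmetic, using the comment's 10^100 bound.
# With a probability floor of p_min, any offer whose payoff exceeds its
# cost by more than a factor of 1/p_min has positive expected value.

p_min = 1e-100    # bounded probability that "my reasoning is broken"
cost = 1.0        # what the mugger asks for
payoff = 1e102    # what the mugger promises: a factor of 10^102 > 10^100

expected_value = p_min * payoff - cost
print(expected_value)  # ~99 > 0, so the bounded agent "must" pay up
```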

Comment author: DSherron 29 June 2013 07:26:39AM 2 points [-]

That's not how decision theory works. The bounds on my probabilities don't actually apply quite like that. When I'm making a decision, I can usefully talk about the expected utility of taking the bet, under the assumption that I have not made an error, and then multiply that by the odds of me not making an error, adding the final result to the expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I've made a logic error; after all, given that my entire reasoning is wrong, I shouldn't expect taking the bet to be any better or worse than not taking it.

In shorter terms: EU(action) = EU(action & ¬error) + EU(action & error); also EU(action & error) = EU(anyOtherAction & error), meaning that when I compare any 2 actions I get EU(action) - EU(otherAction) = EU(action & ¬error) - EU(otherAction & ¬error). Even though my probability estimates are affected by the presence of an error factor, my decisions are not.

On the surface this seems like an argument that the distinction is somehow trivial or pointless; however, the critical difference comes in the fact that while I cannot predict the nature of such an error ahead of time, I can potentially recover from it iff I assign >0 probability to it occurring. Otherwise I will never ever assign it anything other than 0, no matter how much evidence I see. In the incredibly improbable event that I am wrong, given extraordinary amounts of evidence I can be convinced of that fact. And that will cause all of my other probabilities to update, which will cause my decisions to change.

Comment author: NickRetallack 29 June 2013 04:12:41AM 1 point [-]

I didn't come up with it. It's called the EPR Paradox.

Comment author: DSherron 29 June 2013 05:19:45AM *  0 points [-]

Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.

In response to Emotional Basilisks
Comment author: Eliezer_Yudkowsky 28 June 2013 11:04:38PM 7 points [-]

Would you kill babies if it was intrinsically the right thing to do? If not, under what other circumstances would you not do the right thing? If yes, how right would it have to be, for how many babies?

EDIT IN RESPONSE: My intended point had been that sometimes you do have to fight the hypothetical.

Comment author: DSherron 28 June 2013 11:44:48PM *  0 points [-]

This comment fails to address the post in any way whatsoever. No claim is made of the "right" thing to do; a hypothetical is offered, and the question asked is "what do you do?" It is not even the case that the hypothetical rests on an idea of an intrinsic "right thing" to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It's not an especially interesting or original question, but it does not make any claims which are relevant to your comment.

EDIT: That does make more sense, although I'd never seen that particular example used as "fighting the hypothetical", more just that "the right thing" is insufficiently defined for that sort of thing. Downvote revoked, but it's still not exactly on point to me. I also don't agree that you need to fight the hypothetical this time, other than to get rid of the particular example.

In response to Emotional Basilisks
Comment author: DSherron 28 June 2013 11:38:32PM 3 points [-]

While I don't entirely think this article was brilliant, it seems to be getting downvoted in excess of what seems appropriate. Not entirely sure why that is, although a bad choice of example probably helped push it along.

To answer the main question: need more information. I mean, it depends on the degree to which the negative effects happen, and the degree to which it seems this new belief will be likely to have major positive impacts on decision-making in various situations. I would, assuming I'm competent and motivated enough, create a secret society which generally kept the secret but spread it to all of the world's best and brightest, particularly in fields where knowing the secret would be vital to real success. I would also potentially offer a public face of the organization, where the secret is openly offered to any willing to take on the observed penalties in exchange for the observed gains. It could only be given out to those trusted not to tell, of course, but it should still be publicly offered; science needs to know, even if not every scientist needs to know.
