All of Wrongnesslessness's Comments + Replies

If you mean the less-fun-to-work-with part, it's fairly obvious. You have a good idea, but a smarter person, A, has already thought of it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by the smarter people B and C. It's like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, which are two of the three basic needs involved in intrinsic motivation.

If you mean the issue of why fun is valued mor... (read more)

When I read this:

9) To want to be the best in something has absolutely no precedence over doing something that matters.

I immediately thought of this.

On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although "something that matters" is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it's certainly fun to hang out with people smarter... (read more)

0diegocaleiro
I'd like to know why you think this is the case.

I've always wanted a name like that!

But I'm worried that with such a generic English name people will expect me to speak perfect English, which means they'll be negatively surprised when they hear my noticeable accent.

In my opinion, this second question is far from being as important as the first one. Also, please see these posting guidelines:

These traditionally go in Discussion:

  • a link with minimal commentary
  • a question or brainstorming opportunity for the Less Wrong community

Beyond that, here are some factors that suggest you should post in Main:

  • Your post discusses core Less Wrong topics.
  • The material in your post seems especially important or useful.
  • You put a lot of thought or effort into your post. (Citing studies, making diagrams, and agonizing over wording
... (read more)

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn't factually true. For they knew nothing of such things as the reach of explanations or the power of science or even laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to the formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonar

... (read more)
2FiftyTwo
I think he vastly overestimates the effect of optimism on technological development, vs say population size, disease levels and food supply.
-2Jayson_Virissimo
Or, more accurately, you and I would be non-existent and some other group of beings would be quasi-immortal.
-8[anonymous]
2Eugine_Nier
And yet they couldn't even defeat the Spartans or keep Savonarola from taking power.

I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.

It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any differences in quantity, however vast, are simply irrelevant. An experience of long unbearable torture cannot be quantified in terms of minor discomforts.

0TheOtherDave
It seems our introspective accounts of our mental processes are qualitatively different, then. I'm willing to take your word for it that your experience of long unbearable torture cannot be "quantified" in terms of minor discomforts. If you wish to argue that mine can't either, I'm willing to listen.

I've always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.

I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer a FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences, those are qua... (read more)

0benelliott
I don't want to be rude, but your first example in particular looks like somewhere where it's beneficial to signal lexicographic preferences. What do you mean you don't know how to optimise for this! If you want an FAI, then donating to SIAI almost certainly does more good than nothing (even if they aren't as effective as they could be, they almost certainly don't have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). Any time spent acquiring or spending time with true friends would be better spent on earning money to donate (or persuading others not to) if your preferences are truly lexicographic. This is what I mean when I say that in the real world, lexicographic preferences just cache out as not caring about the bottom level at all.

You've also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not "do I prefer an FAI to any number of true friends" but "do I prefer a single true friend to any chance of an FAI, however small", in which case the answer, for me at least, seems to be no.

It is not a trivial task to define a utility function that could compare such incomparable qualia.

Wikipedia:

However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.

Has it been shown that this is not the case for dust specks and torture?
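
For concreteness, here is a minimal sketch (my own illustration, not from the thread; the tuple encoding and names are hypothetical) of the kind of ordering being discussed. The point of the Wikipedia excerpt is that no continuous real-valued utility function reproduces an ordering like this.

```python
# Minimal sketch (illustrative only; the tuple encoding is my own, not
# from the thread) of a lexicographic preference over outcomes.
# An outcome is a (torture, dust_specks) pair, where less of each is better
# and the torture coordinate dominates: specks only ever break ties.

def lexicographically_prefers(a, b):
    """Return True if outcome `a` is strictly preferred to outcome `b`."""
    torture_a, specks_a = a
    torture_b, specks_b = b
    if torture_a != torture_b:
        return torture_a < torture_b   # any torture difference settles it
    return specks_a < specks_b         # otherwise compare specks

# No number of dust specks outweighs any positive amount of torture:
assert lexicographically_prefers((0, 10**100), (1, 0))
# Specks still matter when the torture coordinates are equal:
assert lexicographically_prefers((0, 1), (0, 2))
```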

5benelliott
In the real world, if you had lexicographic preferences you effectively wouldn't care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there's always a chance of affecting something.
1TheOtherDave
I'm not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment. When I try to come up with real-world examples of lexicographic preferences, it's pretty clear to me that I'm rounding... that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering. But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.

I'm a bit confused with this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

2Incorrect
They aren't adding qualia, they are adding the utility they associate with qualia.

With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.

Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger's experiments with a 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation of observing itself alive at the end.
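
For concreteness (a worked figure of my own, assuming the ten rounds are independent, which the example seems to imply), the outside-view survival probability is

$$P(\text{survive all 10 rounds}) = (1/2)^{10} = 1/1024 \approx 0.1\%,$$

yet any cat that does survive will, from the inside, have observed nothing but survival, which is presumably the tension the "honest expectation" is pointing at.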

0Vladimir_Nesov
This kind of expectation is useful for planning actions that the surviving agent would perform, and indeed if the survival takes place, the updated probability (given the additional information that the agent did survive) of that hypothetical would no longer be low. But it's not useful for planning actions in the context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning actions for that eventuality or considering most related questions is an incorrect use of one's time. So if we are discussing a decision that takes place before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading. See also this post: Preference For (Many) Future Worlds.
0[anonymous]
The primary purpose of decision theory is to determine good decisions, which is what I meant to refer to by saying "for decision making purposes". I don't see how "expressing honest expectation" in the sense of your example would contribute to the choice of decisions. More generally, this sense of "expectation" doesn't seem good for anything except for creating a mistaken impression that certain incredibly improbable hypotheticals matter somehow. See also: Preference For (Many) Future Worlds.

Perhaps at this point you can argue, that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix.

There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what's left of me to all my best parts and memories retrieved from an adequate backup.

Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This... (read more)

since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning

I'm convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live "outside" their simulations after their "deaths". Since one cannot feel one's own nonexistence, I totally expect to experience "afterlife" some day.

4Vladimir_Nesov
The word "expectation" refers to probability. When probability is low, as in tossing a coin 1000 times and getting "heads" each time, we say that the event is "not expected", even though it's possible. Similarly, afterlife is strictly speaking possible, but it's not expected in the sense that it only holds insignificant probability. With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.
0ArisKatsaris
I think you may be treating your continuation as a binary affair (you either exist or don't exist, you either experience or don't experience) as if "you" (your mind) were an ontologically simple entity. Let's say that in the vast majority of universes you "die" from an external perspective. This means that from an internal perspective, in the vast majority of universes you'll experience the degradation of your mental circuitry -- whether said degradation lasts ten years or one millisecond, you will experience said degradation up to the point where you are no longer able to experience anything. So let's say that at some point your mind is in a state where you're still sensing experiences, but you don't form new memories, nor hold any old memories; and because you don't even have much of a short-term memory, your thinking doesn't get more complicated than "Fuzzy warmth. Nice" or perhaps "Pain. Hurts!". At this point, this experience is all you effectively are -- it's not as if this circuitry will be metaphysically connected to a single specific set of memories, or a single specific personality. Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix. And therefore it will experience an afterlife -- in a sense. But not necessarily an afterlife with memories or personality that have anything to do with your present memories or personality, right? Quantum Immortality doesn't exist. At best one can hope for Quantum Reincarnation -- and even that requires certain unverified assumptions...

considering that the dangers of technology might outweigh the risks.

This should probably read "might outweigh the benefits".

We don't have to attract everyone. We should just make sure that the main page does not send away people who would have stayed if they were exposed to some other LW stuff instead.

That's a good point. However, I think there is not much we can do about it by refining the main page. More precisely, I doubt that any intelligent person with even a remote interest in rationality could leave "a community blog devoted to refining the art of human rationality" without at least taking a look at some of the blog posts, irrespective of the contents of the ma... (read more)

1Epiphany
Problem 1: If the person has already done work to refine their skill with rationality on their own, they may figure the blog is likely to be more of the same old stuff. LessWrong would love to have more of that type of person, don't you think? They're not going to be interested in the topic of rationality presented plainly all by itself. You have to immediately prove to them that the particular collection of information is fresh and exciting, or they slot you away in the "already read about it" category.

Problem 2: A random post isn't necessarily going to be a good post. I did random post several times before I found one that excited me and made me realize that LW is awesome, and the ONLY reason why I kept trying was because so many friends had referred me to LW that I was starting to think I was missing something.

Problem 3: A lot of users are busy and don't take their time with new sites. They usually will not read an entire front page. You have a few seconds to interest them. You either put something awesome in their face, or they're lost.

And they aren't even regular pentagons! So, it's all real then...

Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!

1NancyLebovitz
As far as I can tell, unless I have very specific information about the person I'm writing to, I'm writing for myself as of the moment just before I had whatever insight I'm trying to convey. That could explain why I'm much more concerned with being clear than with being persuasive.

B: BECAUSE IT IS THE LAW.

I cannot imagine a real physicist saying something like that. Sounds more like a bad physics teacher... or a good judge.

DaFranker160

To me, that sounds like just about every physics teacher I've ever spoken to (for cases where I was aware that they were a physics teacher).

I remember once going around to look for them so that one of them could finally tell me where the frak gravity gets its power source. I got so many appeals to authority and confused or borked responses, and a surprisingly high number of password guesses (sometimes more than one guess per teacher - beat that!). One of them just pointed me to the equations and said "Shut up and plug the variables" (in retrospect, that was probably the best response of the lot).

Basically, if you want to study physics, don't come to Canada.

4atorm
D(redd): I AM THE LAW!

But humans are crazy! Aren't they?

0TimS
If we define crazy as "sufficiently mentally unusual as to be noticeably dysfunctional in society" then I estimate at least 50% of humanity is not crazy. If we define crazy as "sufficiently mentally unusual that they cannot achieve ordinary goals more than 70% of the time," then I estimate that at least 75% of humanity is not crazy.

All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)

Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)

9Incorrect
I'm trying to figure out what this statement means. What would the universe look like if it were false?
1ArisKatsaris
This prediction isn't falsifiable -- the word "crazy" is not precise enough, and the word "sufficient" is a loophole you can drive the planet Jupiter through.
2TimS
Aren't you supposed to separate distinct predictions? Edit: don't see it in the rules, so remainder of post changed to reflect. I upvote the second prediction - the existence of self-aware humans seems evidence of overconfidence, at the very least.

Of course, another problem (and that's a huge one) is that our head does not really care much about our goals. The wicked organ will happily do anything that benefits our genes, even if it leaves us completely miserable.

6Viliam_Bur
Actually, the wicked organ often does things harmful to both our goals and our genes, based on heuristics for an ancient environment that no longer exists. Or I am just too stupid to see how exactly not living healthily, not expanding my social skills, and not making a lot of money contributes to the survival of my genes.

One problem with this equation is that it dooms us to use hyperbolic discounting (which is dynamically inconsistent), not exponential discounting, which would be rational (given rationally calibrated coefficients).
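
For reference, these are the standard textbook forms of the two discount schedules (my own addition, not taken from the linked equation):

$$D_{\exp}(t) = e^{-rt}, \qquad D_{\mathrm{hyp}}(t) = \frac{1}{1 + kt}.$$

Exponential discounting is dynamically consistent because the relative weight of two future moments, $D_{\exp}(t+s)/D_{\exp}(t) = e^{-rs}$, does not depend on $t$; the hyperbolic ratio $D_{\mathrm{hyp}}(t+s)/D_{\mathrm{hyp}}(t) = (1+kt)/(1+kt+ks)$ does depend on $t$, which is why preferences between the same two delayed rewards can reverse as they draw nearer.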

3Dmytry
Well, the heuristic has to encompass the decreasing ability to predict the future at larger times, which need not be exponential if the risks do not stay constant.

The powers of instrumental rationality in the context of rapid technological progress and the inability/unwillingness of irrational people to listen to rational arguments strongly suggest the following scenario:

After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving the humankind from the clutches of zero- and negative-sum status games.

Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).

1hamnox
This sounds remarkably like my dream. But I figured that we'd take over some of the world, institute mandatory rationality training in that part, use our Nation of Rationalists to take over the rest of the world, and then go out and start colonizing space.
3Normal_Anomaly
I find it very unlikely that this will happen, mostly due to a lack of sufficiently effective rationalists with an interest in taking over the world directly and the moral fiber to provide good government once they do so. But I think it would be awesome.

  • Foundation for Human Sapience (or Foundation for Advanced Sapience)

  • Reality Transplantation Center

  • Thoughtful Organization

  • CORTEX - Center for Organized Rational Thinking and EXperimentation

  • OOPS - Organization for Optimal Perception Seekers

  • BAYES - Bureau for Advancing Yudkowsky's Experiments in Sanity

I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.

So, what about a nice-looking acronym like "Truth, Rationality, Universe, Eliezer"? :)

5PhilSchwartz
If there is concern that people outside of LW won't know the metaphor, then the name "Rationality Waterline" can be used at first with the goal of gaining enough recognition to move on to simply "Waterline" at a later date.
2bbarth
Seriously? This place already has a rep for being a personality cult. Let's not purposefully reinforce it. ;)

Wikipedia is accessible if you disable JavaScript (or use a mobile app, or just use the Google cache).

0gwern
If anyone is curious, I'm downvoting everyone in this thread - not only is this a terrible place to discuss SOPA and blackout circumventions (seriously, we can't wait a day and get on with our lives?), there's already a SOPA post in Discussion.
-8Anubhav

I would prefer this comment to be more like 0

Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and extremes irritate you?

In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.

1rocurley
I meant that the intrinsic value of the comment does not justify its vote count.

If ambiguity aversion is a paradox and not just a cognitive bias, does this mean that all irrational things people systematically do are also paradoxes?

What particular definition of "paradox" are you using? E.g, which one of the definitions in the Paradox wikipedia article?

0fool
Meh. It should not really affect what I've said or what I intend to say later if you substitute "violation of the rules of probability" or "of utility" for "paradox" (Ellsberg and Allais respectively). However, "paradox" is what they're generally called. And it's shorter.

Sod off! Overt aggression is a pleasant relief compared to the subtle, catty 'niceness' that the most competitive humans excel at.

Hmm... Doesn't this look like something an aggressive alpha male would say?..

Uh-oh!

9wedrifid
It's almost as though I responded to scheming to kill all people with the traits 'male' and 'aggressive' with benign aggression deliberately. For instance it could be that I would prefer to designate myself as part of the powerful group as opposed to the embittered group trying to scheme against them!

So the true lesson of this post is that we should get rid of all the aggressive alpha males in our society. I guess I always found the idea obvious, but now that it has been validated, can we please start devising some plan for implementing it?

3loup-vaillant
You might want to be extra-careful with your plan. Because, you know, power corrupts.
wedrifid120

So the true lesson of this post is that we should get rid of all the aggressive alpha males in our society. I guess I always found the idea obvious, but now that it has been validated, can we please start devising some plan for implementing it?

Sod off! Overt aggression is a pleasant relief compared to the subtle, catty 'niceness' that the most competitive humans excel at. Only get rid of aggressive alpha males who act out violently (ie. those without sufficient restraint to abide by laws.)

Are there any general-purpose anti-akrasia gamification social sites? I recently found Pomodorium but it is regrettably single-player.