When I read this:
9) To want to be the best in something has absolutely no precedence over doing something that matters.
I immediately thought of this.
On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although "something that matters" is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it's certainly fun to hang out with people smarter...
I've always wanted a name like that!
But I'm worried that with such a generic English name people will expect me to speak perfect English, which means they'll be unpleasantly surprised when they hear my noticeable accent.
In my opinion, this second question is far from being as important as the first one. Also, please see these posting guidelines:
...These traditionally go in Discussion:
- a link with minimal commentary
- a question or brainstorming opportunity for the Less Wrong community
Beyond that, here are some factors that suggest you should post in Main:
- Your post discusses core Less Wrong topics.
- The material in your post seems especially important or useful.
- You put a lot of thought or effort into your post. (Citing studies, making diagrams, and agonizing over wording…)
...The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn't factually true. For they knew nothing of such things as the reach of explanations or the power of science or even laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to the formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola…
I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.
It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any difference in quantity, however vast, is simply irrelevant. An experience of long, unbearable torture cannot be quantified in terms of minor discomforts.
I've always thought the problem with the real world is that we cannot really optimize for anything in it, precisely because it is so messy and entangled.
I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer a FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences; those are qua...
It is not a trivial task to define a utility function that could compare such incomparable qualia.
However, it is possible for preferences not to be representable by a utility function. A standard example is lexicographic preferences, which are not continuous and, over a continuum of outcomes, cannot be represented by any utility function at all (see the sketch below).
Has it been shown that this is not the case for dust specks and torture?
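As a minimal sketch of what such a preference looks like, here is a lexicographic comparison over hypothetical (torture, dust specks) outcome pairs; the coordinate names and numbers are my own illustrative choices, not anything from the original thought experiment:

```python
# A lexicographic preference over hypothetical (torture, specks) pairs.
# The first coordinate dominates: no number of dust specks, however
# astronomical, can offset any increase in torture.

def lex_better(a, b):
    """Return True if outcome a is strictly preferred to outcome b."""
    torture_a, specks_a = a
    torture_b, specks_b = b
    if torture_a != torture_b:
        return torture_a < torture_b  # less torture always wins
    return specks_a < specks_b        # specks only break ties

# Zero torture with an astronomical number of specks still beats
# one minute of torture with zero specks:
print(lex_better((0, 3 ** 45), (1, 0)))  # True
```

The standard non-representability argument: a utility function would have to map each torture level to its own disjoint interval of reals, each containing a distinct rational, and there are only countably many rationals.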
I'm a bit confused by this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.
Edit: removed a bad example of qualia comparison.
With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.
Well, this sounds right, but it seems to indicate some problem with decision theory. If a cat has to endure ten rounds of a Schrödinger-style experiment, with a 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation of observing itself alive at the end.
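From the outside view, at least, the arithmetic is straightforward; a quick sketch (Python, with the round count and per-round odds taken from the hypothetical above):

```python
# Outside view: probability that the cat survives all rounds,
# assuming an independent 1/2 chance of death in each round.
rounds = 10
p_death = 0.5
p_survive = (1 - p_death) ** rounds
print(p_survive)  # 0.0009765625, i.e. 1/1024

# The quantum-immortality intuition says the cat's subjective
# probability of observing itself alive at the end is 1, since it
# cannot observe the branches in which it is dead. The tension is
# that standard decision theory offers no slot for this second number.
```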
Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix.
There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what's left of me to all my best parts and memories retrieved from an adequate backup.
Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This...
since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning
I'm convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live "outside" their simulations after their "deaths". Since one cannot feel one's own nonexistence, I totally expect to experience "afterlife" some day.
considering that the dangers of technology might outweigh the risks.
This should probably read "might outweigh the benefits".
We don't have to attract everyone. We should just make sure that the main page does not send away people who would have stayed if they were exposed to some other LW stuff instead.
That's a good point. However, I think there is not much we can do about it by refining the main page. More precisely, I doubt that an intelligent person with even a remote interest in rationality would leave "a community blog devoted to refining the art of human rationality" without at least taking a look at some of the blog posts, irrespective of the contents of the ma...
And they aren't even regular pentagons! So, it's all real then...
Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!
B: BECAUSE IT IS THE LAW.
I cannot imagine a real physicist saying something like that. Sounds more like a bad physics teacher... or a good judge.
To me, that sounds like just about every physics teacher I've ever spoken to (for cases where I was aware that they were a physics teacher).
I remember once going around to look for them so that one of them could finally tell me where the frak gravity gets its power source. I got so many appeals to authority and confused or borked responses, and a surprisingly high number of password guesses (sometimes more than one guess per teacher - beat that!). One of them just pointed me to the equations and said "Shut up and plug the variables" (in retrospect, that was probably the best response of the lot).
Basically, if you want to study physics, don't come to Canada.
But humans are crazy! Aren't they?
All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)
Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)
Of course, another problem (and that's a huge one) is that our head does not really care much about our goals. The wicked organ will happily do anything that benefits our genes, even if it leaves us completely miserable.
One problem with this equation is that it commits us to hyperbolic discounting (which is dynamically inconsistent) rather than exponential discounting, which would be rational (given well-calibrated coefficients).
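To illustrate the difference, here is a small sketch; the reward amounts, delays, and discount coefficients are my own illustrative choices:

```python
import math

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

def exponential(value, delay, r=0.1):
    """Exponential discounting: value * exp(-r * delay)."""
    return value * math.exp(-r * delay)

# Choice: $100 at day 10 vs. $110 at day 11.
# From day 0 the hyperbolic discounter prefers the larger-later reward,
# but once day 10 arrives the same discounter flips to smaller-sooner.
print(hyperbolic(100, 10), hyperbolic(110, 11))  # 9.09 < 9.17: wait
print(hyperbolic(100, 0), hyperbolic(110, 1))    # 100.0 > 55.0: take it now

# The exponential discounter ranks the options the same way from every
# vantage point, so its plans never reverse.
print(exponential(100, 10), exponential(110, 11))  # 36.79 > 36.62
print(exponential(100, 0), exponential(110, 1))    # 100.0 > 99.53
```

Hyperbolic curves cross as time passes, so the ranking of smaller-sooner versus larger-later rewards flips; exponential curves never cross, which is exactly the dynamic consistency.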
The power of instrumental rationality in the context of rapid technological progress, combined with the inability or unwillingness of irrational people to listen to rational arguments, strongly suggests the following scenario:
After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving humankind from the clutches of zero- and negative-sum status games.
Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).
Foundation for Human Sapience (or Foundation for Advanced Sapience)
Reality Transplantation Center
Thoughtful Organization
CORTEX - Center for Organized Rational Thinking and EXperimentation
OOPS - Organization for Optimal Perception Seekers
BAYES - Bureau for Advancing Yudkowsky's Experiments in Sanity
I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.
So, what about a nice-looking acronym like "Truth, Rationality, Universe, Eliezer"? :)
Wikipedia is accessible if you disable JavaScript (or use a mobile app, or just the Google cache).
I would prefer this comment to be more like 0
Does your preference mean that you honestly think the intrinsic value of the comment does not justify its vote count, or that you just generally prefer moderation and are irritated by extremes?
In the former case, I would definitely vote toward what I thought would be a more justified vote count. Though in the latter case, I would probably be completely blind to my bias.
If ambiguity aversion is a paradox and not just a cognitive bias, does this mean that all irrational things people systematically do are also paradoxes?
What particular definition of "paradox" are you using? E.g., which one of the definitions in the Wikipedia article on Paradox?
Sod off! Overt aggression is a pleasant relief compared to the subtle, catty 'niceness' that the most competitive humans excel at.
Hmm... Doesn't this look like something an aggressive alpha male would say?
Uh-oh!
So the true lesson of this post is that we should get rid of all the aggressive alpha males in our society. I guess I always found the idea obvious, but now that it has been validated, can we please start devising some plan for implementing it?
Sod off! Overt aggression is a pleasant relief compared to the subtle, catty 'niceness' that the most competitive humans excel at. Only get rid of aggressive alpha males who act out violently (i.e., those without sufficient restraint to abide by laws).
Are there any general-purpose anti-akrasia gamification social sites? I recently found Pomodorium but it is regrettably single-player.
If you mean the less-fun-to-work-with part, it's fairly obvious. You have a good idea, but the smarter person A has already thought of it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by the smarter people B and C. It's like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, two of the three basic needs involved in intrinsic motivation.
If you mean the issue of why fun is valued mor...