Comment author: diegocaleiro 28 May 2013 08:06:18PM 0 points [-]

I'd like to know why you think this is the case.

Comment author: Wrongnesslessness 29 May 2013 06:55:23AM 2 points [-]

If you mean the less-fun-to-work-with part, it's fairly obvious. You have a good idea, but a smarter person A has already thought about it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by smarter people B and C. It's like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, which are two of the three basic needs involved in intrinsic motivation.

If you mean the issue of why fun is valued more than doing something that matters, it is less clear. My guess is that boredom is a more immediate and pressing concern than a meaningless existence (where "something that matters" is a cure for meaningless existence, and "fun" is a cure for boredom). Smart people also seem to get bored more easily, so the need to escape boredom is probably more important for them.

Comment author: Wrongnesslessness 28 May 2013 07:09:02AM 2 points [-]

When I read this:

9) To want to be the best in something has absolutely no precedence over doing something that matters.

I immediately thought of this.

On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although "something that matters" is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it's certainly fun to hang out with people smarter than you and learn from them, it's much less fun to work with them.

Comment author: Tuxedage 11 May 2013 04:33:53PM *  58 points [-]

So I've recently decided to change my real name from an oriental one to John Adams. I am not white.

There’s a significant amount of evidence that shows that

(1) Common names have better reception in many areas, especially publication and job interviews.

(2) White names do significantly better than non-white names

(3) Last names that begin with early letters of the alphabet have a significant advantage over last names beginning with later letters.

Sources:

http://www.ncbi.nlm.nih.gov/pubmed/19020207

http://blog.simplejustice.us/files/66432-58232/SSQUKalistFinal.pdf

http://ideas.repec.org/p/hhs/sunrpe/2006_0013.html

http://www.nber.org/papers/w9873.pdf?new_window=1

http://www.nber.org/digest/sep03/w9873.html

Therefore, if I were to use "John", one of the most common 'white' first names, along with "Adams", a 'white' surname that also begins with the letter A, it stands to reason that I would gain a number of advantages.

Furthermore, I have very little attachment to my family heritage, so switching names costs me nothing beyond the minor inconvenience of paperwork. Depending on your current name and how attached you are to it, changing it may be extremely worthwhile. At the very least it may be worth considering: for some people it is a very cheap optimization with significant benefits.

Comment author: Wrongnesslessness 13 May 2013 10:30:26AM 5 points [-]

I've always wanted a name like that!

But I'm worried that with such a generic English name people will expect me to speak perfect English, which means they'll be negatively surprised when they hear my noticeable accent.

CFAR and SI MOOCs: a Great Opportunity

13 Wrongnesslessness 13 November 2012 10:30AM

Massive open online courses seem to be marching towards total world domination like some kind of educational singularity (at least in the case of Coursera). At the same time, there are still relatively few courses available, and each new added course is a small happening in the growing MOOC community.

Needless to say, this seems like a perfect opportunity for SI and CFAR to advance their goals via this new education medium. Some people seem to have already seen the potential and taken advantage of it:

One interesting trend that can be seen is companies offering MOOCs to increase the adoption of their tools/technologies. We have seen this with 10gen offering Mongo courses and, to a lesser extent, with Coursera's 'Functional Programming in Scala' taught by Martin Odersky.

(from the above link to the Class Central Blog)

 

So the question is, are there any online courses already planned by CFAR and/or SI? And if not, when will it happen?

 

Edit: This is not a "yes or no" question, albeit formulated as one. I've searched the archives and did not find any mention of MOOCs as a potentially crucial device for spreading our views. If any such courses are already being developed or at least planned, I'll be happy to move this post to the open thread, as some have requested, or delete it entirely. If not, please view this as a request for discussion and brainstorming.

P.S.: Sorry, I don't have the time to write a good article on this topic.

Comment author: Wrongnesslessness 05 November 2012 09:53:16AM 21 points [-]

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn't factually true. For they knew nothing of such things as the reach of explanations or the power of science or even laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to the formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola might be. Like every other destruction of optimism, whether in a whole civilization or in a single individual, these must have been unspeakable catastrophes for those who had dared to expect progress. But we should feel more than sympathy for those people. We should take it personally. For if any of those earlier experiments in optimism had succeeded, our species would be exploring the stars by now, and you and I would be immortal.

David Deutsch, The Beginning of Infinity

Comment author: TheOtherDave 08 September 2012 05:48:55PM 1 point [-]

I'm not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment.

When I try to come up with real-world examples of lexicographic preferences, it's pretty clear to me that I'm rounding... that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering.

But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.

Comment author: Wrongnesslessness 09 September 2012 05:36:45AM 0 points [-]

I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.

It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any difference in quantity, however vast, is simply irrelevant. An experience of long, unbearable torture cannot be quantified in terms of minor discomforts.

Comment author: benelliott 08 September 2012 07:05:31PM 3 points [-]

In the real world, if you had lexicographic preferences you effectively wouldn't care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there's always a chance of affecting something.
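The all-or-nothing behavior described above can be sketched as tuple comparison, which Python happens to perform lexicographically (a minimal illustration; the names and numbers below are made up, not from the thread):

```python
# Represent an outcome's value as a pair (top_level, bottom_level).
# Python compares tuples lexicographically, so any nonzero edge on
# the top level dominates the comparison, no matter how enormous
# the bottom-level difference is.

def preferred(a, b):
    """Return whichever outcome is (weakly) preferred."""
    return a if a >= b else b

tiny_top_edge = (1e-9, 0.0)   # an epsilon chance at the top-level good
huge_bottom = (0.0, 1e9)      # a vast amount of the bottom-level good

# The epsilon chance at the top level wins.
assert preferred(tiny_top_edge, huge_bottom) == tiny_top_edge

# The bottom level only matters to break exact ties at the top.
assert preferred((1.0, 5.0), (1.0, 7.0)) == (1.0, 7.0)
```

This also makes the representability problem concrete: no single continuous number can stand in for the pair, because the top component trumps the bottom one at every scale.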

Comment author: Wrongnesslessness 09 September 2012 05:24:21AM 0 points [-]

I've always thought the problem with the real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.

I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer a FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences, those are qualitatively different life experiences.

Since I do not really know how to optimize for any of this, I'm not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I'm given a choice, it's quite clear what my choice will be.

Comment author: Incorrect 08 September 2012 04:38:17PM 1 point [-]

They aren't adding qualia, they are adding the utility they associate with qualia.

Comment author: Wrongnesslessness 08 September 2012 05:13:01PM 0 points [-]

It is not a trivial task to define a utility function that could compare such incomparable qualia.

Wikipedia:

However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.

Has it been shown that this is not the case for dust specks and torture?

Comment author: Wrongnesslessness 08 September 2012 04:35:18PM *  0 points [-]

I'm a bit confused by this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

Comment author: Vladimir_Nesov 06 September 2012 06:27:39PM *  4 points [-]

I totally expect to experience "afterlife" some day

The word "expectation" refers to probability. When probability is low, as in tossing a coin 1000 times and getting "heads" each time, we say that the event is "not expected", even though it's possible. Similarly, afterlife is strictly speaking possible, but it's not expected in the sense that it only holds insignificant probability. With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.

Comment author: Wrongnesslessness 07 September 2012 04:44:40AM 0 points [-]

With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.

Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger's experiments with 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation to observe itself alive in the end.
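The unconditional arithmetic here is easy to state (a quick sketch using just the hypothetical numbers from the comment above):

```python
# Hypothetical from the comment: 10 independent rounds, each with a
# 1/2 chance of death. Unconditionally, surviving every round is rare,
# yet conditional on making any observation at all, the cat observes
# itself alive with probability 1 -- which is the tension with naive
# expectation being gestured at.
p_death_per_round = 0.5
rounds = 10
p_survive_all = (1 - p_death_per_round) ** rounds
print(p_survive_all)  # 0.0009765625, i.e. 1/1024
```

So "expect to observe myself alive" and "assign 1/1024 to surviving" are answers to two different questions, and an ordinary expected-utility calculation only sees the second.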
