Comment author: Wei_Dai 25 February 2010 03:57:01PM *  3 points [-]

I disagree that it's a more elegant solution. Suppose I say "While on vacation with a bunch of friends, Chris lost their money." I bet almost everyone would interpret "their" to mean "Chris and friends'" instead of "Chris's". Even when the meaning can be correctly deduced from context, using "they" in place of "he or she" as a singular referring pronoun would probably cause a significant delay in reading as the reader tries to figure out what "they" might be referring to, and whether it's an unintentional error.

In communities of people who prefer not to use either "he" or "she" to refer to themselves, they can set whatever community-specific rules they want. I have no objection to using "they" in that context, but it doesn't seem like a good general solution for the problem of unknown genders.

Comment author: nolrai 25 February 2010 04:44:26PM 2 points [-]

Natural languages are full of ambiguity, and yes, that use sounds wrong because you're talking about a particular person.

And if you really wanted to say that it was Chris's money, how about "Chris lost Chris's money"? It sounds awkward to me because my English only allows the use of "they" in the singular if it refers to an abstract person, not a particular real person.

I mean, it's not like "Chris lost his money" is unambiguous; it is not at all clear to me whether the "he" refers to Chris or someone else. That would probably be clear in discourse because of context.

Comment author: nolrai 17 February 2010 07:17:24PM 8 points [-]

I really wonder how this sort of result applies to cultures that don't expect everyone to have high self-esteem, such as, say, Japan.

Comment author: Morendil 05 February 2010 06:42:14PM 3 points [-]

From professional experience (I've been a programmer since the 80's and was paid for it from the 90's onward) I agree with you entirely re. graphical representation. That doesn't keep generation after generation of tool vendors from crowing that, thanks to their new insight, programming will finally be made easy through "visual this, that or the other" — UML being the latest such to have a significant impact.

You have me pondering what we might gain from whipping up a Domain-Specific Language (say, in a DSL-friendly base language such as Ruby) to represent arguments in. It shouldn't be too hard to bake some basics of Bayesian inference into that.
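To make the suggestion concrete, here is a minimal sketch of what such a Ruby DSL might look like. All of the names here (`Argument`, `claim`, `evidence`) are invented for illustration, and the "Bayesian basics" are just the odds form of Bayes' rule applied to declared likelihoods — a toy, not a worked-out argument-mapping tool.

```ruby
# Hypothetical argument DSL: declare a claim with a prior, attach
# evidence with likelihoods, and compute a posterior via Bayes' rule.
class Argument
  def initialize(&block)
    @prior = nil
    @updates = []
    instance_eval(&block)  # lets the block use the DSL methods below
  end

  # Declare the claim under discussion, with a prior probability.
  def claim(text, prior:)
    @text = text
    @prior = prior
  end

  # Record a piece of evidence with its likelihood under the claim
  # being true vs. false.
  def evidence(text, p_if_true:, p_if_false:)
    @updates << [text, p_if_true, p_if_false]
  end

  # Fold the evidence into a posterior using the odds form of
  # Bayes' rule: posterior odds = prior odds * likelihood ratios.
  def posterior
    odds = @prior / (1.0 - @prior)
    @updates.each { |_, pt, pf| odds *= pt / pf }
    odds / (1.0 + odds)
  end
end

arg = Argument.new do
  claim "The butler did it", prior: 0.1
  evidence "Fingerprints on the knife", p_if_true: 0.8, p_if_false: 0.1
end

puts arg.posterior.round(3)  # ≈ 0.471 (posterior odds 8/9, so 8/17)
```

The `instance_eval` trick is what makes Ruby "DSL-friendly" here: the block reads like a declaration rather than a sequence of method calls on an explicit receiver.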

Comment author: nolrai 05 February 2010 08:06:10PM -1 points [-]

Well, visual programming of visual things is good, but that's just WYSIWYG.

Comment author: Wei_Dai 01 February 2010 12:22:10PM *  1 point [-]

Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms, which are premise-conclusion links.

But that's not all that math is. Suppose we eventually prove that P!=NP. How did we pick the axioms that we used to prove it? (And suppose we pick the wrong axioms. Would that change the fact that P!=NP?) Why are we pretty sure today that P!=NP without having a chain of premise-conclusion links? These are all parts of math; they're just parts of math that we don't understand.

ETA: To put it another way, if you ask someone who is working on the P!=NP question what he's doing, he is not going to answer that he is trying to determine whether a specific set of axioms proves or disproves P!=NP. He's going to answer that he's trying to determine whether P!=NP. If those axioms don't work out, he'll just pick another set. There is a sense that the problem is about something that is not identified by any specific set of axioms that he happens to hold in his brain, that any set of axioms he does pick is just a map to a territory that's "out there". But according to your meta-ethics, there is no "out there" for morality. So why does it deserve to be called realism?

Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we're not sure how that works)?

Comment author: nolrai 02 February 2010 10:44:20PM 0 points [-]

Properly speaking, no, they are not part of math; they are part of computer science, i.e. a description of how computations actually happen in the real world.

That is the missing piece that determines what axioms to use.

Comment author: Eliezer_Yudkowsky 31 January 2010 11:14:15PM 1 point [-]

No. See other replies.

Comment author: nolrai 02 February 2010 10:22:59PM 3 points [-]

See, I think you're misunderstanding his response. I mean, that is the only way I can interpret it to make sense.

Your insistence that it is not the right interpretation is very odd. I get that you don't want to trigger people's cooperation instincts, but that's the only framework in which talking about other beings makes sense.

The morality you are talking about is the human-now-extended morality (well, closer to the less-wrong-now-extended morality), in that it is the morality that results from extending the values humans currently have. Now, you seem to need to categorize your own morality as different from others' in order to feel right about imposing it. So you categorize it as simply "morality," but your morality is not necessarily my morality, and so that categorization feels iffy to me. Now, it's certainly closer to mine than to the Babyeaters', but I have no proof it is the same. Calling it simply "Morality" papers over this.

Comment author: nolrai 03 May 2009 07:42:28PM 0 points [-]

I think I have a cooperation instinct that is pushing me towards the Super Happy future.

It feels better, but it is probably not what I would do in real life. Or I am more different from others than I give credit for.

Comment author: nolrai 30 April 2009 10:46:00PM 0 points [-]

Either I would become incapable of any action or choice, or I wouldn't change at all, or I would give up the abstract goals and gradually reclaim the concrete ones.

In response to Failed Utopia #4-2
Comment author: nolrai 26 April 2009 08:34:00PM 1 point [-]

You know, I can't help but read this as a victory for humanity. Not a full victory, but I think the probability of some sort of interstellar civilization that isn't a dystopia is higher afterwards than before. If nothing else, we are more aware of the dangers of AI, and anything that does that and leaves a non-dystopian civilization capable of making useful AI is most likely a good thing by my utility function.

One thing that does bug me is that I do not value happiness as much as most people do. Maybe I'm just not as empathetic as most people? I mean, I actually hope that humanity is replaced by a decent civilization/species that still values Truth and Beauty; I care a lot more about whether they are successful than whether they are happy.

I wonder how much of the variance in preference between this and other outcomes could be explained by whether people are single (i.e. they don't have someone they love to the point of "I don't want to consider even trying to live with someone else") vs. those that do.

I would take it. I imagine I would be very unhappy for a few months. (It feels like it would take years, but that's a well-known bias.)

I assume "verthandi" is also not a coincidence.