All of Virge2's Comments + Replies

Virge200

Correction: I don't think it helps Bostrom's position to overload the concept of friendliness with the connotations of close friendship.

Virge200

Carl: "This point is elementary. A 'friend' who seeks to transform himself into somebody who wants to hurt you, is not your friend."

The switch from "friendly" (having kindly interest and goodwill; not hostile) to a "friend" (one attached to another by affection or esteem) is problematic. To me it radically distorts the meaning of FAI and makes this pithy little sound-bite irrelevant. I don't think it helps Bostrom's position to overload the concept of friendship with the connotations of close friendship.

Exactly how much human bi... (read more)

Virge200

Kip Werking: "P2. But, all we have to prove that giving to charity, etc., is right, is that everyone thinks it is"

You're claiming that there exists no other way to prove that giving to charity is right. That's a claim to omniscience.

Still, it's unlikely to be defeated in the space of a comment thread, simply because your sweeping generalization about the goodness of charity is far from being universally accepted. A very general claim like that, with no concrete scenario, no background information on where it is to be applied, makes relativism a fore... (read more)

Virge200

komponisto: "I'm really having trouble understanding how this isn't tantamount to moral relativism"

I think I see an element of confusion here in the definition of moral relativism. A moral relativist holds that "no universal standard exists by which to assess an ethical proposition's truth". However, the word universal in this context (moral philosophy) is only expected to apply to all possible humans, not all conceivable intelligent beings. (Of all the famous moral relativist philosophers, how many have addressed the morals of general ... (read more)

Virge220

Larry D'Anna: "And it doesn't do any good to say that they aren't defective. They aren't defective from a human, moral point of view, but that's not the point. From evolution's view, there's hardly anything more defective, except perhaps a fox that voluntarily restrains its own breeding."

Why is it "not the point"? In this discussion we are talking about differences in moral computation as implemented within individual humans. That the blind idiot's global optimization strategy defines homosexuality as a defect is of no relevance.

Larr... (read more)

Virge250

Eliezer: "The basic ev-bio necessity behind the psychological unity of human brains is not widely understood."

I agree. And I think you've over-emphasized the unity and ignored evidence of diversity, explaining it away as defects.

Eliezer: "And even more importantly, the portion of our values that we regard as transpersonal, the portion we would intervene to enforce against others, is not all of our values; it's not going to include a taste for pepperoni pizza, or in my case, it's not going to include a notion of heterosexuality or homosexuali... (read more)

Virge220

Eliezer: "But this would be an extreme position to take with respect to your fellow humans, and I recommend against doing so. Even a psychopath would still be in a common moral reference frame with you, if, fully informed, they would decide to take a pill that would make them non-psychopaths. If you told me that my ability to care about other people was neurologically damaged, and you offered me a pill to fix it, I would take it."

How sure are you that most human moral disagreements are attributable to

  • lack of veridical information, or
  • lack of
... (read more)
Virge220

It seems that the Pebblesorting People had no problems with variations in spelling of their names. (Biko=Boki)

Good parable though, Eliezer.

Virge200

Imagine the year 2100

AI Prac Class Task: (a) design and implement a smarter-than-human AI using only open source components; (b) ask it to write up your prac report.
Time allotted: 4 hours.
Bonus points: disconnect your AI host from all communications devices; place your host in a Faraday cage; disable your AI's morality module; find a way to shut down the AI without resorting to triggering the failsafe host self-destruct.

sophiesdad, since a human today could not design a modern microprocessor (without using the already-developed plethora of design tools) t... (read more)

0DanielLC
Trivial. Once you've disabled your AI's morality module, you've already shut it down.
Virge240

HA: "I aspire not to care about rescuing toddlers from burning orphanages. There seems to be good evidence they're not even conscious, self-reflective entities yet."

HA, do you think that only the burning toddler matters? Don't the carers from the orphanage have feelings? Will they not suffer on hearing about the death of someone they've cared for?

Overcoming bias does not mean discarding empathy. If you aspire to jettison your emotions, I wonder how you'll make an unbiased selection of which ones you don't need.

Virge284

Ian, there's nothing wrong with reductionism.

Overly simplistic reductionism is wrong. For example, if you divide a computer into individual bits, each of which can be in one of two states, then you can't explain the operation of the computer in just the states of its bits. But that reduction omitted an important part: the interconnections of the bits, how each affects the others. When you reduce a computer to individual bits and their immediate relationships with other bits, you can indeed explain the whole computer's operation, completely. (It just becomes ... (read more)
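The point about interconnections can be made concrete with a minimal sketch (my own illustration, in Python, not part of the original comment): a half-adder reduced entirely to NAND gates. The bit states alone explain nothing; the bits plus their wiring explain the whole computation.

```python
# Sketch: a computer reduced to bits AND their interconnections.
# The only primitive is one two-input gate; everything else is wiring.

def nand(a: int, b: int) -> int:
    """A single two-input gate: the only primitive relationship used."""
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Sum and carry of two bits, built only from NANDs and their wiring."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR via four NANDs -> sum bit
    c = nand(n1, n1)                    # AND via two NANDs  -> carry bit
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Nothing magical is added at any step; the "whole" is fully accounted for once the relationships between the parts are included in the reduction.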

8Dojan
A car without its engine isn't very good for driving, and neither is the engine all by itself. But that doesn't mean anything magical happens when you put them together. Nor does it mean you can put them together any which way.
Virge250

Eliezer, I guess the answer you want is that "science" as we know it has at least one bias: a bias to cling to pragmatic pre-existing explanations, even when they embody confused thinking and unnecessary complications. This bias appears to produce major inefficiencies in the process.

Viewed as a search algorithm, science follows multiple alternative paths, but it only prunes a branch when the sheer bulk of experimental evidence clearly favours another branch, not when an alternative path provides a lower-cost explanation for the same evidence. For efficiency, science should instead prune (or at least allocate resources) based on a fair comparison of the current competing explanations.

Science has a nostalgic bias.
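A minimal sketch of what a "fair comparison" could mean (the complexity penalty and the numbers here are my own assumptions, not part of the original argument): score competing explanations of the same evidence by log-likelihood minus a cost for complexity, so the lower-cost explanation can be preferred even before further experiments arrive.

```python
# Toy model comparison: equal fit to the evidence, different complexity.
# The penalty of one nat per free parameter is an arbitrary illustrative choice.

def score(log_likelihood: float, n_parameters: int) -> float:
    """Log-likelihood minus a crude complexity cost (1 nat per parameter)."""
    return log_likelihood - n_parameters

# Both theories fit the existing evidence equally well...
old_theory = score(log_likelihood=-10.0, n_parameters=7)
new_theory = score(log_likelihood=-10.0, n_parameters=3)

# ...but the simpler (lower-cost) explanation should already be favoured:
print(new_theory > old_theory)  # True
```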

1velisar
The science world, like all the other "worlds" made up of people who share something they all cherish, is bound to have a status quo bias. (The enigmatic add-on: one cannot escape the feeling that there is such a thing as time.)
Virge220

Suggested reading: http://en.wikipedia.org/wiki/Visual_cortex#Function "Conceptually, this retinotopy mapping is a transformation of the visual image from retina to V1. The correspondence between a given location in V1 and in the subjective visual field is very precise: even the blind spots are mapped into V1."

We can easily reject the Cartesian theater notions, but there are still "paintings" of images in our brains.

Virge2150

"This pattern of belief is very hard to justify from a Bayesian perspective. It is just the same hypothesis in both cases. Even if, in the second case, I announce an experimental method and my intent to actually test it, I have not yet experimented and I have not yet received any observational evidence in favor of the hypothesis."

Always consider the source.

It is the same hypothesis, but if the person telling the story is unknown to me at the outset, then the credibility of the source rises with test plans, as follows.

After the just-so story: Th... (read more)
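A hedged numeric sketch of that credibility effect (all the probabilities below are invented for illustration): the hypothesis itself is unchanged, but announcing a test plan raises the probability that the source is a careful reasoner, which in turn raises the probability assigned to the story.

```python
# Marginalize belief in the hypothesis over the source's reliability.
# The conditional probabilities are assumed values, chosen only to
# illustrate the direction of the update.

def p_hypothesis(p_careful: float,
                 p_true_if_careful: float = 0.6,
                 p_true_if_not: float = 0.1) -> float:
    """P(hypothesis) given how likely the source is to be a careful reasoner."""
    return p_careful * p_true_if_careful + (1 - p_careful) * p_true_if_not

just_so_story = p_hypothesis(p_careful=0.2)   # unknown source, no test plan
with_test_plan = p_hypothesis(p_careful=0.5)  # same story, test announced

print(just_so_story, with_test_plan)  # the update comes from source credibility
```

The observational evidence for the hypothesis is identical in both cases; only the estimate of the source changes.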

3zslastman
I underwent exactly the change in confidence described in the above article, and for pretty much those reasons, on reflection. The situation is an extremely common one in science. The only place you're wrong is in saying that the confidence update is necessarily small. If somebody has worked out a series of experiments to test an idea, it means they've thought about the idea and taken it seriously, and are willing to advertise this at the risk of some of their credibility. You'd assign a higher prior to such an idea than to an off-hand remark, not even counting the demonstration of capability it provides.