Comment author: Richard_Hollerith 02 October 2008 11:35:00AM 0 points [-]

OK, my previous comment was too rude. I won't do it again, OK?

Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.

I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive), you would prefer painlessly winking out of existence to enduring pain.

My objection to this talk of destroying the universe in response to a terrorism incident, etc., is that the people whose terminal values are served by that outcome (such as, I am assuming, you) share the universe with people whose terminal values assign a negative value to that outcome (such as me). By using this method of increasing your utility, you impose severe negative utility on me.

Note that if you engage in ordinary quantum suicide then my circumstances remain materially the same in both Everett branches, and the objection I just described does not apply.

Comment author: Richard_Hollerith 01 October 2008 01:15:00PM 0 points [-]

At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.

Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.

Comment author: Richard_Hollerith 30 September 2008 05:18:00PM 0 points [-]

Whoever (E or Friedman) chose the title, "Prediction vs. Explanation", was probably thinking along the same lines.

Comment author: Richard_Hollerith 30 September 2008 05:14:00PM 0 points [-]

The way science is currently done, experimental data that the formulator of the hypothesis did not know about is much stronger evidence for a hypothesis than experimental data he did know about.

A hypothesis formulated by a perfect Bayesian reasoner would not have that property, but hypotheses from human scientists do, and I know of no cost-effective way to stop human scientists from generating the effect. Part of the reason is that the originator of a hypothesis is too optimistic about it (and this optimism stems in part from the fact that being known as the originator of a successful hypothesis is very career-enhancing), and part is that a scientist tends to stop searching for hypotheses once he has one that fits the data (which I believe has been called motivated stopping on this blog).

Most of the time, these human biases will swamp the other considerations mentioned so far in these comments (except the consideration mentioned below). Consequently, the hypothesis advanced by Scientist 1 is more probable.

Someone made a very good comment to the effect that Scientist 1 is probably making better use of prior information. That may be another way of describing the effect I have described.
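The point about motivated stopping can be made concrete with a toy simulation (the scenario and names here are my own illustration, not anything from the thread): a scientist who chooses the hypothesis that best fits data he has already seen will always find that his hypothesis fits that data at least as well as the truth does, so a good fit to already-known data is, by itself, weak evidence.

```python
import math
import random

random.seed(0)

TRUE_P = 0.6   # the real (unknown) bias of a coin
N_OLD = 200    # flips the "explaining" scientist has already seen

def flips(n, p):
    """Simulate n Bernoulli trials with success probability p."""
    return [random.random() < p for _ in range(n)]

def log_lik(p, data):
    """Log-likelihood of bias hypothesis p given binary data."""
    return sum(math.log(p if x else 1.0 - p) for x in data)

old = flips(N_OLD, TRUE_P)

# The "explainer" picks the hypothesis that best fits the data already
# in hand (the maximum-likelihood fit; a stand-in for motivated stopping).
mle = sum(old) / len(old)
p_explainer = min(max(mle, 1e-6), 1.0 - 1e-6)  # avoid log(0) at the edges

# On the data used to choose it, the fitted hypothesis always scores at
# least as well as the true one -- which is why fit to known data is
# weaker evidence than fit to data the formulator did not know about.
assert log_lik(p_explainer, old) >= log_lik(TRUE_P, old)
```

The inequality in the final assertion holds by construction, not by luck: maximizing likelihood over the observed data guarantees the fitted hypothesis looks at least as good there as any rival, including the truth. Only performance on data the hypothesis was not fitted to can break that tie.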

Comment author: Richard_Hollerith 22 September 2008 04:49:00AM 0 points [-]

in a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,

Richard: your first criticism has too low an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification.

It is not a justifiable simplification. A satisfactory answer to the question you were trying to answer should remain satisfactory even if other existential risks (e.g., a giant comet) are high. If other existential risks were high, would you just throw up your hands and say that the question you were trying to answer is unanswerable?

Again, I think your contributions to this comment thread were better than anyone else's. I hope you continue to contribute here.

Comment author: Richard_Hollerith 07 August 2008 10:48:00PM 0 points [-]

An unusually moderate and temperate exchange.

Comment author: Richard_Hollerith 07 August 2008 03:09:00PM 1 point [-]

I disagree with the last 2 comments.

Eliezer's priority has gradually shifted over the last 5 years or so from increasing his own knowledge to transmitting what he knows to others, which is exactly the behavior I would expect from someone with his stated goals who knows what he is doing.

Yes, he has suggested or implied many times that he expects to implement the intelligence explosion more or less by himself (and I do not like that). But ever since the Summer of AI, his actions (particularly all the effort he has put into blogging and his references to 15-to-18-year-olds, which suggest that he has thought about the most effective audience to target with his blogging) strongly indicate that he understands that the best way for him to assist the singularitarian project at this time is to transmit what he knows to others.

The blog is exactly the means of transmitting scientific knowledge I would expect from someone who knows what he is doing. Surely we can look past the fact that some crusty academics look down on the blog.

I know of no one who has been more effective than Eliezer over the last 8 years or so at transmitting knowledge to people with a high aptitude for math and science.

And the suggestion that Eliezer lacks discipline strikes me as extremely unlikely. Just because a person is extremely intelligent does not mean that it is easy for the person to acquire knowledge at the rate Eliezer has acquired knowledge or to become so effective at transmitting knowledge.

In response to The Opposite Sex
Comment author: Richard_Hollerith 12 July 2008 05:12:00AM -1 points [-]

I will probably have to stop reading this blog for a while because my life has gotten very tricky and precarious. I am still available for more personal communication with rationalists and scientific generalists especially those living in the Bay Area.

There have been 3 comments on this blog by men to the effect that sex is not that important or that the writer has given up on sex. Those comments suggest what I would consider a lack of sufficient respect for the importance of sex. I tend to believe that for a young man to learn how to have a satisfying and engaging sex life is about as important as obtaining an education or achieving economic security through working. In other words, it is primary.

If someone emails me that they want to read it, I might write more on this topic on my blog.

In response to Is Morality Given?
Comment author: Richard_Hollerith 07 July 2008 07:34:00PM 0 points [-]

It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . .

If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.

denis bider, under the CEV plan for singularity, no human has to give an unambiguous definition or enumeration of his or her terminal values before the launch of the seed of the superintelligence. Consequently, those who lean toward the CEV plan feel much freer to regard themselves as having hundreds of terminal values. Consequently, refraining from murder might easily be a terminal value for them.

Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.

In response to Is Morality Given?
Comment author: Richard_Hollerith 07 July 2008 07:31:00PM 0 points [-]

My comment is not charitable enough towards the CEVists. I ask the moderator to delete it; I will now submit a replacement.
