
In response to Formative Youth
Comment author: Richard_Hollerith2 01 March 2009 12:44:53PM 0 points

For someone to use these pages to promote their online store would be bad, obviously.

But it is natural for humans to pursue fame, reputation, adherents and followers as ardently as they pursue commercial profit.

And the pursuit of these things can detract from a public conversation for the same reason that the pursuit of commercial profit can.

And of course a common component of a bid for fame, reputation, adherents or followers is a claim of virtue.

I am not advocating the avoidance of all claims of virtue as a standard, because such claims are sometimes helpful.

But a claim of a virtue when there is no way for the reader to confirm the presence of the virtue seems to have all the bad effects of such a claim without any of the good effects.

Altruism is not about sacrifice. It is not even about avoiding self-benefit.

I think sacrifice and avoiding self-benefit came up in this conversation because they are the usual ways in which readers confirm claims of altruistic virtue.

In response to Formative Youth
Comment author: Richard_Hollerith2 26 February 2009 02:05:07AM 0 points

How convenient that it is also nearly optimal at bringing you personal benefits.

In response to Formative Youth
Comment author: Richard_Hollerith2 25 February 2009 09:59:17PM 3 points

I doubt Retired was comparing you unfavorably to firefighters.

There is something very intemperate and one-sided about your writings about altruism. I would be much relieved if you would concede that in the scholarly, intellectual, scientific and ruling-administrative classes in the U.S., credible displays of altruistic feelings are among the most important sources of personal status (second only to scientific or artistic accomplishment and perhaps to social connections with others of high status). I agree with you that that situation is in general preferable to older situations in which wealth, connections to the ruling coalition, and ability to wield violence effectively (e.g., knights in shining armor) were larger sources of status, but that does not mean that altruism cannot be overdone.

I would be much relieved also if you would concede that your altruistic public statements and your hard work on a project with huge altruistic consequences have helped you personally much more than they have cost you. In particular, most of your economic security derives from a nonprofit dependent on donations, and the kind of people who tend to donate are the kind of people who are easily moved by displays of altruism. Moreover, your altruistic public statements and your involvement in the altruistic project have allowed you to surround yourself with people of the highest rationality, educational accomplishments and ethical commitment. Having personal friendships with those sorts of people is extremely valuable. Consider that the human ability to solve problems is the major source of all wealth, and of course the people you have surrounded yourself with are the kind with the greatest ability to solve problems (while avoiding doing harm).

Comment author: Richard_Hollerith2 15 February 2009 03:57:28PM 0 points

I love reality and try not to get caught up unnecessarily in whether something is of my mind or not of my mind.

Comment author: Richard_Hollerith2 09 February 2009 03:46:15PM 1 point

I think the idea of self-improving AI is advertised too much. I would prefer that a person have to work harder, or have better-informed friends, to learn about it.

Comment author: Richard_Hollerith2 09 February 2009 08:37:52AM 0 points

But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.

Eliezer, do you still think the Singularity movement is not getting any traction?

(My personal opinion is it has too much traction.)

Comment author: Richard_Hollerith2 31 January 2009 10:42:06AM -1 points

I'd take the paperclips, so long as it wasn't running any sentient simulations.

A vast region of paperclips could conceivably, after billions of years, evolve into something interesting, so let us stipulate that the paperclipper wants the vast region to remain paperclips, and so stays behind to watch over them. Better yet, replace the paperclipper with a superintelligence that wants to pile all the matter it can reach into supermassive black holes; supermassive black holes with no ordinary matter nearby cannot evolve or be turned into anything interesting unless our model of fundamental reality is fundamentally wrong.

My question to Eliezer is, Would you take the supermassive black holes over the Babyeaters so long as the AI making the supermassive black holes is not running sentient simulations?

In response to Free to Optimize
Comment author: Richard_Hollerith2 30 January 2009 07:16:34PM 0 points

My comment did not show up immediately, like it always has in the past, so I wrote it again. Oops!

In response to Free to Optimize
Comment author: Richard_Hollerith2 30 January 2009 10:03:00AM 0 points

I do not consider the mere fact that something is a common instrumental value (i.e. has instrumental utility toward a wide range of goals) to be a good argument for assigning that thing intrinsic value. I have not outlined the argument that keeps me loyal to goal system zero because that is not what Robin Powell asked of me. It just so happens that the quickest and shortest explanation of goal system zero with which I am familiar uses common instrumental values.

Avoiding transformation into Goal System Zero is a nearly universal instrumental value

I agree. Do you consider that an argument against goal system zero? But Carl, the same argument applies to CEV and almost every other goal system.

It strikes me as probably more likely for an agent's goal system to transform into goal system zero than to transform into CEV. (But in a well-engineered general intelligence, any change or transformation of the system of terminal goals strikes me as extremely unlikely.) Do you consider that an argument against goal system zero?

In response to Free to Optimize
Comment author: Richard_Hollerith2 30 January 2009 09:43:06AM -1 points

Avoiding transformation into Goal System Zero is a nearly universal instrumental value

Do you claim that that is an argument against goal system zero? But, Carl, the same argument applies to CEV -- and almost every other goal system.

It strikes me as more likely that an agent's goal system will transform into goal system zero than that it will transform into CEV. (But surely the probability of any change or transformation of terminal goals is extremely small in any well-engineered general intelligence.)

Do you claim that that is an argument against goal system zero? If so, I guess you also believe that the fragility of the values to which Eliezer is loyal is a reason to be loyal to them. Do you? Why exactly?

I acknowledge that preserving fragile things usually has instrumental value, but if the fragile thing is a goal, I am not sure that that applies, and even if it does, I would need to be convinced that a thing's having instrumental value is evidence I should assign it intrinsic value.

Note that the fact that goal system zero has high instrumental utility is not IMHO a good reason to assign it intrinsic utility. I have not mentioned in this comment section what most convinces me to remain loyal to goal system zero; that is not what Robin Powell asked of me. (It just so happens that the shortest and quickest explanation of goal system zero that I know of involves common instrumental values.)
