Comment author: Silas 04 September 2008 06:42:56PM 11 points

By the way:

Human: "What do you care about 3 paperclips? Haven't you made trillions already? That's like a rounding error!" Paperclip Maximizer: "How can you talk about paperclips like that?"

***

PM: "What do you care about a billion human algorithm continuities? You've got virtually the same one in billions of others! And you'll even be able to embed the algorithm in machines one day!" H: "How can you talk about human lives that way?"

Comment author: Silas 04 September 2008 06:21:13PM 10 points

Wait wait wait: Isn't this the same kind of argument as in the dilemma about "We will execute you within the next week on a day that you won't expect"? (Sorry, don't know the name for that puzzle.) In that one, the argument goes that if it's the last day of the week, the prisoner knows that's the last chance they have to execute him, so he'll expect it, so it can't be that day. But then, if it's the next-to-last day, he knows they can't execute him on the last day, so they have to execute him on that next-to-last day. But then he expects it! And so on.

So, after concluding they can't execute him, they execute him on Wednesday. "Wait! But I concluded you can't do this!" "Good, then you didn't expect it. Problem solved."

Just as you can't stably have an "(un)expected execution day" in that problem, you can't stably have an "expected future irrelevance" in this one.
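
(Here's a quick sketch of that elimination argument in Python, just to make the backward induction concrete; the function name, the five-day week, and the day numbering are illustrative choices of mine, not anything given in the puzzle:)

    # Backward induction over the week: strike out the latest day the prisoner
    # could still "expect," starting from the last day and working backward.
    def surprise_days(num_days=5):
        possible = set(range(num_days))  # days 0..4, Monday..Friday
        for day in reversed(range(num_days)):
            if possible and day == max(possible):
                # The latest remaining day would be expected, so (by the
                # prisoner's argument) it gets eliminated too.
                possible.remove(day)
        return possible

    print(surprise_days())  # set() -- "no day can work," which is exactly
                            # why the Wednesday execution catches him off guard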

Do I get a prize? No? Okay then.

In response to Dreams of AI Design
Comment author: Silas 27 August 2008 10:12:19PM 0 points

Vassar handles personal networking? Dang, then I probably shouldn't have mouthed off at Robin right after he praised my work.

In response to Dreams of AI Design
Comment author: Silas 27 August 2008 08:01:20PM 0 points

Eliezer, if the US government announced a new Manhattan Project-grade attempt to be the first to build AGI, and put you in charge, would you be able to confidently say how such money should be spent in order to make genuine progress on such a goal?

In response to "Arbitrary"
Comment author: Silas 12 August 2008 10:06:23PM 2 points

It would really rock if you could show the context in which someone used the word "arbitrary" but in a way that just passed the recursive buck.

Here's where I would use it:

[After I ask someone a series of questions about whether certain actions would be immoral]

Me: Now you're just being arbitrary!
Eliezer Yudkowsky: Taboo "arbitrary"!
Me: Okay, he's deciding what's immoral based on whim.
Eliezer Yudkowsky: Taboo "whim"!
Me: Okay, his procedures for deciding what's immoral can't be articulated with finite words to a stranger such that he, and the stranger using his morality articulation, yield the same answers to all morality questions.

I'll send my salary requirements if you want. ;-)

Comment author: Silas 08 August 2008 04:24:11AM 4 points

Wait a sec: I'm not sure people *do* outright avoid modifying their own desires so as to make the desires easier to satisfy, as you are claiming here:

We, ourselves, do not imagine the future and judge, that any future in which our brains want something, and that thing exists, is a good future. If we did think this way, we would say: "Yay! Go ahead and modify us to strongly want something cheap!"

Isn't that exactly what people do when they study ascetic philosophies and otherwise try to see what living simply is like? And would people turn down a pill that made vegetable juice taste like a milkshake and vice versa?

In response to Hiroshima Day
Comment author: Silas 07 August 2008 03:32:35AM 0 points

Who cares, I want to hear about building AIs.

In response to The Meaning of Right
Comment author: Silas 29 July 2008 06:03:00PM 0 points

Matt Simpson: Many an experiment has been thought for the sole purpose of showing how utilitarianism is in direct conflict with our moral intuitions.

I disagree, or you're referring to something I haven't heard of. If I know what you mean here, those are a species of strawman ("act") utilitarianism that doesn't account for the long-term impact and adjustment of behavior that results.

(I'm going to stop giving the caveats; just remember that I accept the possibility you're referring to something else.)

For example, if you're thinking about cases where people would be against a doctor deciding to carve up a healthy patient against his will to save ~40 others, that's not rejection of utilitarianism. It can be recognition that once a doctor does that, people will avoid them in droves, driving up risks all around.

Or, if you're referring to the case of how people would e.g. refuse to divert a train's path so it hits one person instead of five, that's not necessarily an anti-utilitarian intuition; there are many factors at play in such a scenario. For example, the one person may be standing in a normally safe spot and so consented to a lower level of risk, and so by diverting the train, you screw up the ability of people to see what's really safe, etc.

Comment author: Silas 24 July 2008 04:39:24PM 0 points

"If the federal government hadn't bought so much stuff from GM, GM would be a lot smaller today." "If the federal government hadn't bought so much stuff from GM, GM would have instead been tooling up to produce stuff other buyers did want and thus could very well have become successful that way."

???

Comment author: Silas 16 July 2008 03:08:22PM 2 points

I think a lot of people are confusing a) improved ability to act morally with b) improved moral wisdom.

Remember, things like "having fewer deaths and conflicts" do not by themselves mean moral progress. It's only moral progress if people in general change their _evaluation of the merit_ of, e.g., fewer deaths and conflicts.

So it really is a difficult question Eliezer is asking: can you imagine how you would have/achieve greater moral wisdom in the future, as evaluated with your *present* mental faculties?

My best answer is yes, in that I can imagine being better able to discern *inherent conflict* between certain moral principles. Haphazard example: today, I might believe that a) assaulting people out of the blue is bad, and b) credibly demonstrating the ability to fend off assaulters is good. In the future, I might notice that these come into conflict: if people value both of these, some people will inevitably have a utility function that encourages them to do a), and this is unavoidable. So then I find out more precisely how much of one comes at how much cost of the other, and that pursuing certain combinations of them is impossible.

I call that moral progress. Am I right, assuming the premises?
