Comment author: SforSingularity 25 November 2009 01:39:19AM *  0 points

the singularity institute's budget grows much faster than linearly with cash. ... sunk all its income into triple-rollover lottery tickets

I had the same idea of buying very risky investments. Intuitively, it seems that world-saving probability is superlinear in cash. But I think that the intuition is probably incorrect, though I'll have to rethink now that someone else has had it.
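The intuition can be made concrete with a toy expected-utility calculation. All numbers and the utility function below are made up purely for illustration: if value is superlinear (convex) in cash, a long-shot lottery ticket can beat holding the money, even though it loses in plain expected dollars.

```python
# Toy sketch with hypothetical numbers: compare a sure $1000 against a
# 1-in-100,000 shot at $50M under a linear vs. a superlinear utility of cash.
def expected_utility(outcomes, utility):
    """outcomes: list of (probability, cash) pairs."""
    return sum(p * utility(cash) for p, cash in outcomes)

superlinear = lambda cash: cash ** 2   # convex: doubling cash more than doubles value
linear = lambda cash: cash             # risk-neutral baseline

hold = [(1.0, 1_000)]                          # keep the $1000
lottery = [(1e-5, 50_000_000), (1 - 1e-5, 0)]  # long-shot ticket

# Under the convex utility the lottery wins; in expected dollars it loses.
assert expected_utility(lottery, superlinear) > expected_utility(hold, superlinear)
assert expected_utility(lottery, linear) < expected_utility(hold, linear)
```

Whether world-saving probability actually is convex in cash is exactly the disputed premise; the sketch only shows what follows if it were.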

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

Comment author: SforSingularity 25 November 2009 01:19:15AM *  3 points

I think that this is a great idea. I often find myself ending a debate with someone important and rational without any sense that our disagreement has been made explicit, and without a clear account of why we still disagree.

I suspect we would do better if we imposed a norm on LW that said: every time two people disagree, they have to write down, at the end, why they disagree.

Comment author: Eliezer_Yudkowsky 18 November 2009 05:11:34AM 1 point

Asked Greene; he was busy.

Yes, it's possible that Greene is correct about what humanity ought to do at this point, but I think I know a bit more about his arguments than he does about mine...

Comment author: SforSingularity 18 November 2009 06:15:44AM 1 point

That is plausible.

Comment author: Eliezer_Yudkowsky 18 November 2009 04:33:05AM 3 points

Oh well in that case, we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.

This is phrased as a different observable, but it represents more of a disagreement about impossible possible worlds than possible worlds - we disagree about statements with truth conditions of the type of mathematical truth, i.e. which conclusions are implied by which premises. Though we may also have some degree of empirical disagreement about what sort of talk and thought leads to which personal-hedonic results and which interpersonal-political results.

(It's a good and clever question, though!)

Comment author: SforSingularity 18 November 2009 04:52:41AM 3 points

we disagree about what reply we would hear if we asked a friendly AI how to talk and think about morality in order to maximize human welfare as construed in most traditional utilitarian senses.

Surely you should both have large error bars around the answer to that question, in the form of fairly wide probability distributions over the set of possible answers. If you're both well-calibrated rationalists, those distributions should overlap a lot. Perhaps you should go talk to Greene? I vote for a bloggingheads.

Comment author: Eliezer_Yudkowsky 18 November 2009 02:58:48AM 2 points

You do agree that you and Greene are actually saying the same thing, yes?

I don't think we anticipate different experimental results. We do, however, seem to think that people should do different things.

Comment author: SforSingularity 18 November 2009 04:29:20AM *  3 points

people should do different things.

Whose version of "should" are you using in that sentence? If you're using the EY version of "should" then it is not possible for you and Greene to think people should do different things unless you and Greene anticipate different experimental results...

... since the EY version of "should" is (correct me if I am wrong) a long list of specific constraints and valuators that together define one specific utility function U_humanmoralityaccordingtoEY. You can't disagree with Greene over what the concrete result of maximizing U_humanmoralityaccordingtoEY is unless one of you is factually wrong.
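The structure of this argument can be sketched in a few lines. The utility function and the option names below are hypothetical stand-ins (the real U is of course not computable this way): the point is just that two reasoners maximizing the *same* fully specified function must reach the same answer, so any residual disagreement has to be a factual or computational error.

```python
# Illustrative sketch with a made-up utility function U and made-up options.
def U(option):
    scores = {"policy_a": 3.0, "policy_b": 7.0, "policy_c": 5.0}
    return scores[option]

options = ["policy_a", "policy_b", "policy_c"]

answer_reasoner_1 = max(options, key=U)  # first reasoner computes the argmax
answer_reasoner_2 = max(options, key=U)  # second reasoner, same function

# Same inputs, same function: disagreement here would mean a mistake somewhere.
assert answer_reasoner_1 == answer_reasoner_2 == "policy_b"
```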

Comment author: Eliezer_Yudkowsky 18 November 2009 02:24:24AM 5 points

Correct. I'm a moral cognitivist; "should" statements have truth-conditions. It's just that very few possible minds care whether should-statements are true or not; most possible minds care about whether alien statements (like "leads-to-maximum-paperclips") are true or not. They would agree with us on what should be done; they just wouldn't care, because they aren't built to do what they should. They would similarly agree with us that their morals are pointless, but would be concerned with whether their morals are justified-by-paperclip-production, not whether their morals are pointless. And under ordinary circumstances, of course, they would never formulate - let alone bother to compute - the function we name "should" (or the closely related functions "justifiable" or "arbitrary").

Comment author: SforSingularity 18 November 2009 02:52:30AM 1 point

Correct. I'm a moral cognitivist;

I think you're just using different words to say the same thing Greene is saying; in particular, you use "should" and "morally right" in a nonstandard way. But I don't really care about the particular way you formulate the correct position, just as I wouldn't care if you used the variable "x" where Greene used "y" in an integral.

You do agree that you and Greene are actually saying the same thing, yes?

Comment author: Alicorn 08 November 2009 10:40:22PM 4 points

Such an AI wouldn't be able to interact with us, even verbally.

Comment author: SforSingularity 08 November 2009 11:52:50PM 1 point

Alicorn, I hereby award you 10 points. These are redeemable after the singularity for kudos, catgirls and other cool stuff.

Comment author: AngryParsley 08 November 2009 10:58:09AM *  1 point

I was also interested in the discussion of AI risk reduction strategies. Although SIAI espouses friendly AI, there hasn't been much thought about risk mitigation for possible unfriendly AIs. One example is the AI box. Although it is certainly not 100% effective, it's better than nothing (assuming it doesn't encourage people to run more UFAIs). Another would be to program an unfriendly AI with goals that cause it to behave in a manner that does not destroy the world — for example, the goal of not going outside its box.

While the problem of friendly AI is hard enough to make people give up, I also think the problem of controlling unfriendly AI is hard enough to make some of the pro-FAI people do the same.

Comment author: SforSingularity 08 November 2009 10:39:12PM *  1 point

For example, having a goal of not going outside its box.

It would be nice if you could tell an AI not to affect anything outside its box.

10 points will be awarded to the first person who spots why "don't affect anything outside your box" is problematic.

In response to Bay area LW meet-up
Comment author: SforSingularity 08 November 2009 10:16:21AM 1 point

Great meetup; we had a good conversation about the probability of AI risk. Initially I thought the probability of AI disaster was close to 5%, but speaking to Anna Salamon convinced me it was more like 60%.

There was also some discussion of what strategies to follow for AI friendliness.

In response to Bay area LW meet-up
Comment author: SforSingularity 06 November 2009 07:06:12PM 3 points

I'm traveling to the west coast especially for this. Hoping to see you all there.
