wedrifid comments on SIAI - An Examination - Less Wrong

143 points | Post author: BrandonReinhart | 02 May 2011 07:08AM

You are viewing a single comment's thread.

Comment author: wedrifid 05 May 2011 04:49:08PM 2 points

CEV doesn't seem to fit this description.

CEV is one of the things which, if actually explored thoroughly, would definitely fit this description. As it is, it is at the 'bullshit border': the point at which you don't yet have to trade off epistemic considerations in favor of signalling to the lowest common denominator. Because it is still credible that the not-superficially-nice parts just haven't been covered yet - rather than being outright lied about.

Comment author: katydee 05 May 2011 05:04:03PM 2 points

Do you have evidence for this proposition?

Comment author: PhilGoetz 15 May 2011 04:50:15AM * 4 points

I agree entirely with both of wedrifid's comments above. Just read the CEV document, and ask, "If you were tasked with implementing this, how would you do it?" I tried unsuccessfully many times to elicit details from Eliezer on several points back on Overcoming Bias, until I concluded he did not want to go into those details.

One obvious question is, "The expected value calculations that I make from your stated beliefs indicate that your Friendly AI should prefer killing a billion people over taking a 10% chance that one of them is developing an AI; do you agree?" (If the answer is "no", I suspect that is only due to time discounting of utility.)
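A rough worked version of that expected-value comparison, just to make the arithmetic explicit; the disutility symbols and the assumption that a rival AI is scored as an existential-scale loss are placeholders for illustration, not figures from the CEV document:

% Illustrative sketch only: D_kill and D_rival are assumed placeholder disutilities.
Let $D_{\mathrm{kill}}$ be the disutility of killing a billion people and
$D_{\mathrm{rival}}$ the disutility of an unfriendly rival AI reaching superintelligence.
A plain expected-utility maximizer prefers the killing whenever
\[
  0.10 \cdot D_{\mathrm{rival}} > D_{\mathrm{kill}},
\]
that is, whenever $D_{\mathrm{rival}} > 10 \, D_{\mathrm{kill}}$, a condition easily met
if the rival AI is treated as an existential catastrophe.

Which is presumably why the parenthetical above singles out time discounting as the only likely escape route from a "yes".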

Comment author: DrRobertStadler 13 September 2011 08:51:02PM 1 point

Surely, though, if the FAI is in a position to execute that action, it is so far ahead of any AI someone could be developing that it would have little to fear from that possibility as a threat to CEV?

Comment author: PhilGoetz 15 September 2011 10:18:08PM 1 point

It won't be very far ahead of an AI in real time. The idea that the FAI can get far ahead is based on the idea that it can develop very far in a "small" amount of time. Well, so can the new AI - and who's to say it can't develop 10 times as quickly as the FAI? So how can a one-year-old FAI be certain that there isn't an AI project that was started in secret six months ago and is about to overtake it in intelligence?
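A toy growth model makes the overtaking argument concrete; the exponential form, the shared starting capability, and the factor-of-ten rate are all assumptions for illustration only:

% Illustrative sketch only: exponential capability growth with assumed rates.
Suppose the FAI's capability grows as $C_F(t) = C_0 e^{rt}$ (with $t$ in years) and a rival
project launched at $t = 0.5$ grows ten times as fast from the same starting point,
$C_R(t) = C_0 e^{10r(t - 0.5)}$. The rival overtakes the FAI when
\[
  10r(t - 0.5) > rt \quad\Longleftrightarrow\quad t > \tfrac{5}{9} \approx 0.56 \ \text{years},
\]
i.e. roughly three weeks after the rival project appears, despite the FAI's six-month head start.

The head start only buys safety if the FAI's growth rate is at least comparable to the rival's, which is exactly the premise being questioned here.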

Comment author: wedrifid 05 May 2011 05:30:33PM 0 points

It is a somewhat complex issue, best understood by following what is (and isn't) said in conversations about CEV (and sometimes metaethics) when the subject comes up. I believe the last time was a month or two ago in one of lukeprog's posts.

Mind you, this is a subject that would take a couple of posts to explore properly.

Comment author: Will_Newsome 13 May 2011 01:12:51AM * 1 point

Because it is still credible that the not-superficially-nice parts just haven't been covered yet - rather than being outright lied about.

Isn't exploring the consequences of something like CEV pretty boring? Naively, the default scenario, conditional on a large number of background assumptions about the relative optimization possible from various simulation scenarios et cetera, is that the FAI fooms along possibly metaphysical spatiotemporal dimensions turning everything into acausal economic goodness. Once you get past the 'oh no, that means it kills everything I love' part, it's basically a dead end. No? Note: the publicly acknowledged default scenario for a lot of smart people is a lot more PC than this. It's probably not the default for many people at all. I'm not confident in it.

Comment author: Dorikka 13 May 2011 01:45:11AM 4 points

the FAI fooms along possibly metaphysical spatiotemporal dimensions turning everything into acausal economic goodness.

I don't really understand what this means, so I don't see why the next bit follows. Could you break this down, preferably using simpler terms?