GLaDOS comments on A belief propagation graph - Less Wrong

Post author: Dmytry 10 May 2012 04:23AM




Comment author: Wei_Dai 10 April 2012 09:07:22AM 5 points

Your idea about latency in the context of belief propagation seems to have potential (and looks novel, as far as I can tell). It might be a good idea to develop the general theory a bit more, and give some simpler, clearer examples, before applying it to a controversial issue like AI risks. (If you're right about how readily rationalizations walk backwards along the arrows, then you ought to build up the credibility of your idea first before drawing an arrow from it to something we're easily biased about. Or, you yourself might be rationalizing, and your theory would fall apart if you examined it carefully by itself.)
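[For readers unfamiliar with the directional language here: in a belief network, observing a child node updates belief in its parent via Bayes' rule, so information can flow against the causal arrow. A minimal sketch of this backward flow on a two-node network A → B; the function name and all probabilities are invented for illustration, not taken from the post.]

```python
def posterior_a_given_b(p_a, p_b_given_a, p_b_given_not_a):
    """Bayes' rule: belief in A after observing B = true.
    The causal arrow points A -> B, but the update runs B -> A."""
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    return p_a * p_b_given_a / p_b

# Prior belief in A is 0.5; B is far more likely if A holds.
updated = posterior_a_given_b(p_a=0.5, p_b_given_a=0.9, p_b_given_not_a=0.1)
print(round(updated, 3))  # prints 0.9
```

The point of the "backwards" worry is visible here: any node you attach downstream of a belief can, once treated as evidence, pull the upstream belief toward whatever conclusion was desired.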

Also, I think none of the biases you list, except perhaps for fiction, apply to me personally, but I still worry a lot about UFAI.

> Note that the whole issue is strongly asymmetric in favour of not destroying the most unusual phenomena in the universe for many light years, versus destroying it, as destruction is an irreversible act that can be done later but can't be undone later.

In your scenario, even if humanity is preserved, we end up with a much smaller share of the universe than if we had built an FAI, right? If so, I don't think I could be persuaded to relax about UFAI based on this argument.

Comment author: GLaDOS 24 April 2012 06:29:35PM 1 point

> Your idea about latency in the context of belief propagation seems to have potential (and looks novel, as far as I can tell).

Upvoted the article because of this.