Viliam_Bur comments on Strong intutions. Weak arguments. What to do? - Less Wrong Discussion

17 Post author: Wei_Dai 10 May 2012 07:27PM

Comment author: Viliam_Bur 11 May 2012 08:30:27AM

It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly.

Is there disagreement about this? Perhaps not as much as it seems.

The idea of superhuman software is generally accepted on LW. Whether OpenCog is the right platform is a technical detail, which we can skip for the moment.

Might this software turn out benevolent if raised properly? Let's be more specific about that part. If "might" only means "there is a nonzero probability of this outcome", then LW agrees.

So the better question is how high the probability is that a "properly raised" OpenCog system will turn out "benevolent" -- which depends on the definitions of "benevolent" and "properly raised". That is the part that makes the difference.