XiXiDu comments on A Brief Overview of Machine Ethics - Less Wrong Discussion

6 Post author: lukeprog 05 March 2011 06:09AM

Comment author: XiXiDu 06 March 2011 10:34:01AM * 3 points

But if you think I'm not much of a critic of SIAI/Yudkowsky, you're right. Many of my posts have included minor criticisms, but that's because simply repeating the thousands of things on which I agree with Eliezer isn't valued here.

I actually messaged him telling him that he can edit or delete any harmful submissions of mine without having to expect any protest from me. Does that look like I particularly disagree with him, or assign a high probability to him being Dr. Evil? I don't, but it is a possibility, and it is widely ignored. To get provably friendly AI you'll need provably friendly humans. If that isn't possible, you'll need oversight and transparency.

  • Smart people can be wrong.
  • Smart people can be evil.
  • People can appear smarter than they are.

That's why I demand...

  • Third-party peer-review of Yudkowsky's work.
  • Oversight and transparency.
  • Progress reports, roadmaps and confirmable success.

Comment author: wedrifid 06 March 2011 10:45:12AM 2 points

To get provably friendly AI you'll need provably friendly humans.

Not actually true.

Comment author: XiXiDu 06 March 2011 11:33:37AM * 1 point

Not actually true.

Technically it isn't, of course. But I do expect that unfriendly humans might show me a "friendly AI" while actually implementing something else. What I meant is that you'll need friendly humans so you don't end up with some trickster who takes your money, and 30 years later you notice that all he has done is code some chat bot. There are a lot of reasons why the trustworthiness of the humans involved is important. Of course, provably friendly AI is provably friendly no matter who coded it.