Viliam_Bur comments on "Can we know what to do about AI?": An Introduction - Less Wrong

17 Post author: JonahSinick 09 July 2013 06:22PM


Comments (69)


Comment author: Viliam_Bur 10 July 2013 08:53:08AM 1 point [-]

Write an expert system for making philosophical statements about itself.

Make it scientific articles instead. Thus MIRI will get more publications. :D

You can also make different expert systems compete with each other by trying to get the most publications and citations.

Comment author: afterburger 19 July 2013 03:07:36AM 0 points [-]

That sounds exciting too. I don't know enough about this field to get into a debate about whether to save the metaphorical whales or the metaphorical pandas first. Both approaches are complicated. I am glad that MIRI exists, and I wish the researchers good luck.

My main point re: "steel-manning" the MIRI mission is that you need to make testable predictions and then test them; otherwise you're just doing philosophy and/or politics.

Comment author: JoshuaZ 10 July 2013 12:42:53PM 0 points [-]

Make it scientific articles instead. Thus MIRI will get more publications. :D

I suspect that either would be of sufficient interest that, if well done, it could get published. Also, there's a danger in going down research avenues simply because they are more publishable.

You can also make different expert systems compete with each other by trying to get the most publications and citations.

So instead of paperclip maximizers we end up with a world turned into researchpapertronium?

(This last bit is a joke; I think your basic idea is sound.)