ferrouswheel comments on Ben Goertzel on Charity - Less Wrong

1 Post author: XiXiDu 09 March 2011 04:37PM




Comment author: cousin_it 09 March 2011 09:29:50PM *  2 points

Here's an argument that may influence XiXiDu: people like Scott Aaronson and John Baez find Eliezer's ideas worth discussing, while Ben doesn't seem to have any ideas to discuss.

Comment author: ferrouswheel 10 March 2011 06:39:25AM 0 points

Or perhaps it could be that Ben is too busy actually developing and researching AI to spend time discussing his ideas ad nauseam? I stopped following many mailing lists and communities like this one because I don't actually have time to argue in circles with people.

(But I make an exception when people start making up untruths about OpenCog.)

Comment author: cousin_it 10 March 2011 08:22:27AM *  1 point

Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI. "Busy developing and researching" doesn't look very promising from the outside, considering how many other groups present themselves the same way.

Comment author: ferrouswheel 10 March 2011 08:33:21AM 2 points

Ben is publishing several books. Well, he's already published several, but the already-written "Building Better Minds" is due out in early 2012, with a pop-sci version shortly thereafter; both are more current regarding OpenCog. I'll be writing a "practical" guide to OpenCog once we reach our 1.0 release at the end of 2012.

Ben actually does quite a lot of writing, theorizing, and conference work, whereas I and a number of others are more concerned with the software development side of things.

We also have a wiki: http://wiki.opencog.org

Comment author: cousin_it 10 March 2011 08:37:54AM *  1 point

What new insights are there?

Comment author: ferrouswheel 10 March 2011 09:57:13AM 0 points

Well, "new" is relative... so without knowing how familiar you are with OpenCog, I can't comment.

Comment author: cousin_it 10 March 2011 10:04:29AM *  1 point

New insights relative to the current state of academia. Many of us here are up to date in the relevant areas (or trying to be). I'm not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.

Comment author: Vladimir_Nesov 10 March 2011 12:48:53PM 1 point

    Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI.

Evolution managed to do that without any capacity for having insights. It's not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).

Also, just "success" is not something one would wish to support (success at exterminating humanity, say, is distinct from success at eradicating polio), so the query about which institution is more likely to succeed is seriously incomplete.