cousin_it comments on Ben Goertzel on Charity - Less Wrong

1 Post author: XiXiDu 09 March 2011 04:37PM




Comment author: Wei_Dai 09 March 2011 08:44:00PM *  5 points [-]

I don't, but it is evidence that people disagree with the SIAI and think that there are more effective ways towards a positive Singularity.

Did you think that many LWers weren't aware of this fact? I would have thought that everyone already knew...

Don't forget that he once worked for the SIAI. If Michael Vassar were to leave the SIAI and start his own project, wouldn't that be evidence about the SIAI?

I'm curious if you've seen this discussion, which occurred while Ben was still Research Director of SIAI.

ETA: I see that you commented in that thread several months after the initial discussion, so you must have read it. I suppose the problem from your perspective is that you can't really distinguish between Eliezer and Ben. They each think their own approach to a positive Singularity is the best one, and think the other one is incompetent/harmless. You don't know enough to judge the arguments on the object level. LW mostly favors Eliezer, but that might just be groupthink. I'm not really sure how to solve this problem, actually... anyone else have ideas?

Comment author: cousin_it 09 March 2011 09:29:50PM *  2 points [-]

Here's an argument that may influence XiXiDu: people like Scott Aaronson and John Baez find Eliezer's ideas worth discussing, while Ben doesn't seem to have any ideas to discuss.

Comment author: XiXiDu 10 March 2011 09:52:47AM *  2 points [-]

...while Ben doesn't seem to have any ideas to discuss.

That an experimental approach is the way to go. I believe we don't know enough about the nature of AGI to follow a solely theoretical approach right now. That is one of the most obvious shortcomings of the SIAI, in my opinion.

Comment author: ferrouswheel 10 March 2011 06:39:25AM 0 points [-]

Or perhaps it could be that Ben is too busy actually developing and researching AI to spend time discussing his ideas ad nauseam? I stopped following many mailing lists and communities like this one because I don't actually have time to argue in circles with people.

(But I make an exception when people start making up untruths about OpenCog.)

Comment author: cousin_it 10 March 2011 08:22:27AM *  1 point [-]

Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI. "Busy developing and researching" doesn't look very promising from the outside, considering how many other groups present themselves the same way.

Comment author: ferrouswheel 10 March 2011 08:33:21AM 2 points [-]

Ben is publishing several books (well, he's already published several, but the already-written "Building Better Minds" is due in early 2012, with a pop-sci version shortly thereafter; both are more current regarding OpenCog). I'll be writing a "practical" guide to OpenCog once we reach our 1.0 release at the end of 2012.

Ben actually does quite a lot of writing, theorizing, and conference work, whereas I and a number of others are more concerned with the software-development side of things.

We also have a wiki: http://wiki.opencog.org

Comment author: cousin_it 10 March 2011 08:37:54AM *  1 point [-]

What new insights are there?

Comment author: ferrouswheel 10 March 2011 09:57:13AM 0 points [-]

Well, "new" is relative... without knowing your familiarity with OpenCog, I can't comment.

Comment author: cousin_it 10 March 2011 10:04:29AM *  1 point [-]

New insights relative to the current state of academia. Many of us here are up to date on the relevant areas (or trying to be). I'm not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.

Comment author: Vladimir_Nesov 10 March 2011 12:48:53PM 1 point [-]

Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI.

Evolution managed to do that without any capacity for having insights. It's not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).

Also, just "success" is not something one would wish to support (success at exterminating humanity, say, is distinct from success in exterminating polio), so the query about which institution is more likely to succeed is seriously incomplete.

Comment author: timtyler 10 March 2011 09:40:44PM *  0 points [-]

Ben doesn't seem to have any ideas to discuss.

Ben rarely seems short on ideas. For some recent ones, perhaps start with these: