cousin_it comments on Ben Goertzel on Charity - Less Wrong
Did you think that many LWers weren't aware of this fact? I would have thought that everyone already knew...
I'm curious if you've seen this discussion, which occurred while Ben was still Research Director of SIAI.
ETA: I see that you commented in that thread several months after the initial discussion, so you must have read it. I suppose the problem from your perspective is that you can't really distinguish between Eliezer and Ben. They each think their own approach to a positive Singularity is the best one, and think the other one is incompetent/harmless. You don't know enough to judge the arguments on the object level. LW mostly favors Eliezer, but that might just be groupthink. I'm not really sure how to solve this problem, actually... anyone else have ideas?
Here's an argument that may influence XiXiDu: people like Scott Aaronson and John Baez find Eliezer's ideas worth discussing, while Ben doesn't seem to have any ideas to discuss.
That an experimental approach is the way to go. I believe we don't know enough about the nature of AGI to follow a solely theoretical approach right now. That is one of the most obvious shortcomings of the SIAI, in my opinion.
Or perhaps Ben is too busy actually developing and researching AI to spend time discussing his ideas ad nauseam? I stopped following many mailing lists and communities like this one because I don't actually have time to argue in circles with people.
(But I make an exception when people start making up untruths about OpenCog.)
Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI. "Busy developing and researching" doesn't look very promising from the outside, considering how many other groups present themselves the same way.
Ben is publishing several books (well, he's already published several, but the already written "Building Better Minds" is due in early 2012, with a pop-sci version shortly thereafter; both are more current regarding OpenCog). I'll be writing a "practical" guide to OpenCog once we reach our 1.0 release at the end of 2012.
Ben actually does quite a lot of writing, theorizing, and conference speaking, whereas I and a number of others are more concerned with the software development side of things.
We also have a wiki: http://wiki.opencog.org
What new insights are there?
Well, "new" is relative... so without knowing your familiarity with OpenCog, I can't comment.
New insights relative to the current state of academia. Many of us here are up to date on the relevant areas (or trying to be). I'm not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.
Evolution managed to do that without any capacity for insight. It's not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).
Also, mere "success" is not something one would wish to support (success at exterminating humanity, say, is quite distinct from success at eradicating polio), so the question of which institution is more likely to succeed is seriously incomplete.
Ben rarely seems short on ideas. For some recent ones, perhaps start with these:
The GOLEM Eats the Chinese Parent
Coherent Aggregated Volition