Rain comments on Muehlhauser-Wang Dialogue - Less Wrong

Post author: lukeprog 22 April 2012 10:40PM


Comments (284)


Comment author: Rain 23 April 2012 12:20:13PM, 8 points

As I said about a previous discussion with Ben Goertzel, they seem to agree about the dangers, but not about how much the Singularity Institute might affect the outcome.

To rephrase the primary disagreement: "Yes, AIs are incredibly, world-threateningly dangerous, but there's nothing you can do about it."

This seems based on limited views of what sorts of AI minds are possible or likely, such as an anthropomorphized baby which can be taught and studied similarly to human children.

Comment author: Vladimir_Nesov 23 April 2012 12:28:13PM, 4 points

To rephrase the primary disagreement: "Yes, AIs are incredibly, world-threateningly dangerous, but there's nothing you can do about it."

Is that really a disagreement? Even if the current SingInst can't make direct contributions, AGI researchers can, by not pushing AGI capability progress. This issue is not addressed; the heuristic of endorsing technological progress has too much support in researchers' minds for them to take seriously the possible consequences of following it in this instance.

In other words, there are separate questions of whether current SingInst is irrelevant and whether AI safety planning is irrelevant. If the status quo is to try out various things and see what happens, there is probably room for improvement over this process, even if particular actions of SingInst are deemed inadequate. Pointing out possible issues with SingInst doesn't address the relevance of AI safety planning.

Comment author: Rain 23 April 2012 05:03:22PM, 0 points

Pointing out possible issues with SingInst doesn't address the relevance of AI safety planning.

Agreed. But it does mean SI "loses the argument". Yahtzee!

Comment author: jacob_cannell 16 June 2012 07:31:40PM, -1 points

This seems based on limited views of what sorts of AI minds are possible or likely, such as an anthropomorphized baby which can be taught and studied similarly to human children.

The key difference between AI and other software is learning: even current narrow AI systems require long learning/training times, and those systems learn only specific, narrow functionalities.

Considering this, many (perhaps most?) AGI researchers believe that any practical human-level AGI will require an educational process much like the one human children go through.