whpearson comments on Contrarianism and reference class forecasting - Less Wrong

Post author: taw, 25 November 2009 07:41PM



Comment author: whpearson, 27 November 2009 01:16:37AM, 3 points

How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system.*

We have a surfeit of potential systems (with differing capabilities for self-inspection and self-modification) and no way to test whether they will fall into the above category, or how big that category actually is.

*I'm trying to unpack "AGI" here, somewhat.