whpearson comments on Contrarianism and reference class forecasting - Less Wrong
How should we unpack black boxes we don't yet have? For example, a non-neural, language-capable, self-maintaining, goal-oriented system*.
We have a surfeit of potential systems (with differing capacities for self-inspection and self-modification) and no way to test whether any given one falls into the above category, or how large that category actually is.
*I'm trying to unpack AGI here, somewhat.