NancyLebovitz comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
On the other hand, is there any way to think about the odds of humans inventing a program capable of self-optimization which doesn't resemble a human mind?
I'm not sure.
I think if I had a better grasp of whether, and why, humans are (or aren't) capable of building self-optimizing systems at all, I would have a better grasp of the odds that such systems would be of any particular type.