jacob_cannell comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That's a good summary of your post.
I largely agree, but to be fair we should consider that MIRI started working on AI safety theory long before the technology required for practical experimentation with human-level AGI existed: to run such experiments, you need to be close to AGI in the first place.
Now that we are getting closer, the argument for prioritizing experiment over theory becomes stronger.