timtyler comments on Students asked to defend AGI danger update in favor of AGI riskiness - Less Wrong

Post author: lukeprog 18 October 2011 05:24AM


Comment author: timtyler 19 October 2011 05:41:04PM *  -1 points

Are you implying that most AGIs (assuming these intelligences can go FOOM) would not result in human extinction?

Questions about fractions of infinite sets only make sense once an enumeration strategy is specified. Assuming lexicographic ordering of their source code, and considering only the set of superintelligent programs: no, I don't mean to imply that.
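
As an illustration of why the enumeration matters (a minimal sketch, not from the original exchange; the set and orderings are purely hypothetical): the limiting fraction of an infinite set having some property depends entirely on the order in which the elements are listed. Here the same subset of the naturals, the multiples of 3, comes out with density about 1/3 under the usual ordering but about 1/2 under an interleaved ordering.

    # Illustrative sketch: the "fraction" of an infinite set depends on how
    # you enumerate it. The multiples of 3 have limiting density 1/3 under
    # the usual ordering of the naturals, but 1/2 under an ordering that
    # alternates multiples of 3 with non-multiples.

    def density(enumeration, predicate, n):
        """Fraction of the first n enumerated elements satisfying the predicate."""
        count = 0
        for _, x in zip(range(n), enumeration()):
            if predicate(x):
                count += 1
        return count / n

    def usual_order():
        # 0, 1, 2, 3, ...
        k = 0
        while True:
            yield k
            k += 1

    def alternating_order():
        # Interleave multiples of 3 with non-multiples, one of each per step.
        multiples = (3 * k for k in usual_order())
        others = (k for k in usual_order() if k % 3 != 0)
        while True:
            yield next(multiples)
            yield next(others)

    is_multiple_of_3 = lambda x: x % 3 == 0

    print(density(usual_order, is_multiple_of_3, 100_000))        # ~0.333
    print(density(alternating_order, is_multiple_of_3, 100_000))  # ~0.5

The same point carries over to the comment's framing: asking what "fraction" of superintelligent programs lead to human extinction is only well-posed relative to a chosen enumeration (e.g. lexicographic order of source code) or some other measure over the set.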