
Jeff_Alexander comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong Discussion

9 Post author: KatjaGrace 30 September 2014 01:00AM



Comment author: Jeff_Alexander 03 October 2014 02:20:49AM 4 points

If the idea is obvious enough to AI researchers (evolutionary approaches are not uncommon -- there are entire conferences dedicated to the sub-field), then avoiding discussion by Bostrom et al. doesn't reduce information hazard; it just silences the voices of the x-risk savvy while evolutionary AI researchers march on, probably less aware of the risks of what they are doing than if the x-risk savvy kept discussing it.

So, to the extent this idea is obvious or independently discoverable by AI researchers, this approach should not be taken in this case.