
diegocaleiro comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong Discussion

Post author: KatjaGrace, 30 September 2014 01:00AM



You are viewing a single comment's thread.

Comment author: diegocaleiro, 02 October 2014 07:05:28AM, 3 points

This seems like an information hazard, since it has the form: "This estimate for process X, which may destroy the value of the future, seems too low; also, surprisingly few people are currently studying X."

If X is some genetically engineered variety of smallpox, it seems clear that mentioning those facts is hazardous.

If the world didn't know about brain emulation, calling it an under-scrutinized area would be dangerous, relative to, say, just mentioning it to a few safety-savvy, x-risk-savvy neuroscientist friends, who would go on to design safety protocols for it and, if possible, slow down progress in the field.

The same should be done in this case.

Comment author: Jeff_Alexander, 03 October 2014 02:20:49AM, 4 points

If the idea is obvious enough to AI researchers (evolutionary approaches are not uncommon -- there are entire conferences dedicated to the sub-field), then avoiding discussion of it by Bostrom et al. doesn't reduce the information hazard; it just silences the voices of the x-risk savvy while evolutionary AI researchers march on, probably less aware of the risks of what they are doing than if the x-risk savvy kept discussing it.

So, to the extent that this idea is obvious or independently discoverable by AI researchers, this approach should not be taken in this case.