
Manfred comments on [Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising - Less Wrong Discussion

Post author: wallowinmaya | 21 July 2016 08:22PM | 8 points


Comments (5) — viewing a single comment's thread.

Comment author: Manfred | 21 July 2016 09:48:54PM | 12 points

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: Wei_Dai | 22 July 2016 03:49:22PM | 7 points

That's funny. :) But these people actually sound remarkably sane. See here and here, for example.

Comment author: The_Jaded_One | 23 July 2016 12:58:00PM | 6 points

Just commenting to point out that I'm having a fabulous day and a very painless, enjoyable life. I struggle to even understand what suffering is, to be honest — so make a note of that, any negative utilitarians who may be listening!

Comment author: Mac | 22 July 2016 01:41:50PM (edited) | 6 points

The Foundational Research Institute promotes compromise with other value systems. See their work here, here, and here, plus the quoted section in the OP.

Rest easy — the negative utilitarians aren't coming for you.