Desrtopa comments on The 5-Second Level - Less Wrong

111 Post author: Eliezer_Yudkowsky 07 May 2011 04:51AM


Comment author: NancyLebovitz 07 May 2011 07:54:29AM 6 points

why is it that once you try out being in a rationalist community you can't bear the thought of going back

Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".

Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!)

Unfortunately, I recognize that as the bitter truth, so it's of no use to me for training purposes.

Here's something which might work as an indignation test-- could it be a good move for an FAI to set a limit on human intelligence?

If an AI can be built at all, then humanity has been shown to be an AI-creating species. As technology and the promulgation of human knowledge improve, it will become easier and easier to make AIs, and the risk of creating a UFAI that the FAI can't defeat goes up.

It will be easier to have people who can't make AIs than to try to control the tech and knowledge comprehensively enough to make sure there are no additional FOOMs.

I considered limiting initiative (imposing akrasia) rather than intelligence, but I think that would impact a wider range of human values.

Comment author: Desrtopa 09 May 2011 08:40:11PM 1 point

If an AI can be built at all, then humanity has been shown to be an AI-creating species. As technology and the promulgation of human knowledge improve, it will become easier and easier to make AIs, and the risk of creating a UFAI that the FAI can't defeat goes up.

I would think that an FAI so much more intelligent than humans would be able to prevent them from creating a UFAI that could defeat it, without limiting their intelligence.