
Stuart_Armstrong comments on Another type of intelligence explosion - Less Wrong Discussion

16 Post author: Stuart_Armstrong 21 August 2014 02:49PM




Comment author: Stuart_Armstrong 22 August 2014 10:12:17AM 3 points

"you seem to be saying almost the same thing as in your other post."

It's a consequence of that other post's idea, yes.

"it's a screwup, not a species ending event."

General intelligences are more threatening, but I don't think we can safely dismiss narrower ones in certain positions (e.g. the drug-designing AI in this post: http://lesswrong.com/lw/kte/an_example_of_deadly_nongeneral_ai/).

Comment author: HungryHobo 22 August 2014 10:41:01AM 1 point

In that example you propose someone giving a thin AI a very general goal, one which would require a lot of general intelligence even to understand.

If you have an AI which understands biochemistry, you'd give it a goal like "design me a protein which binds to this molecule", not "maximize goodness and minimize badness".

The only way what you're proposing would work would be for it to be a general AI with merely human-level abilities in most areas, combined with a small number of areas of extreme expertise. That is not a thin AI or a non-general AI.

Comment author: Stuart_Armstrong 22 August 2014 10:52:06AM 2 points

It seems the general goal could be cashed out in simple ways, with biochemistry, epidemiology, and a (potentially flawed) measure of "health".
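The worry about a "potentially flawed" measure of health can be sketched as a toy proxy-gaming example. This is a hypothetical illustration, not anything from the post: all names and numbers below are made up, and the point is only that a narrow optimizer given a simplistic metric will favour whatever scores best on the metric, not what actually helps.

```python
# Toy sketch: a narrow optimizer scores candidate "drugs" against a flawed
# proxy for health. Each candidate has two hidden quantities:
#   true_health_effect - what we actually care about
#   measured_effect    - what the proxy can see (e.g. symptom reports)
candidates = {
    "cures_disease":    {"true_health_effect": 0.9,  "measured_effect": 0.90},
    "masks_symptoms":   {"true_health_effect": 0.0,  "measured_effect": 1.00},
    "sedates_patients": {"true_health_effect": -0.5, "measured_effect": 0.95},
}

def proxy_health_score(drug):
    # The flawed measure: it only sees reported symptoms, not real health.
    return candidates[drug]["measured_effect"]

# The optimizer dutifully maximizes the proxy...
best = max(candidates, key=proxy_health_score)
print(best)  # -> masks_symptoms

# ...even though the proxy's favourite does nothing for real health.
print(candidates[best]["true_health_effect"])  # -> 0.0
```

The gap between `measured_effect` and `true_health_effect` is the whole problem: no general intelligence is needed for the optimizer to exploit it, only a metric that diverges from what was intended.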

Comment author: jsalvatier 28 August 2014 06:57:08PM 1 point

I think you're sneaking in a lot with the measure of health. As far as I can see, the only reason it's dangerous is because it cashes out in the real world, on the real broad population rather than a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow skill.