HungryHobo comments on Another type of intelligence explosion - Less Wrong Discussion

Post author: Stuart_Armstrong 21 August 2014 02:49PM

Comment author: HungryHobo 22 August 2014 10:41:01AM 1 point

In that example you propose giving a thin AI a very general goal, one which would require a lot of general intelligence even to understand.

If you have an AI which understands biochemistry, you'd give it a goal like "design me a protein which binds to this molecule", not "maximize goodness and minimize badness".

The only way what you're proposing would work would be for it to be a general AI with merely human-level abilities in most areas, combined with a small number of areas of extreme expertise. That is not a thin AI or a non-general AI.

Comment author: Stuart_Armstrong 22 August 2014 10:52:06AM 2 points

It seems the general goal could be cashed out in simple ways, using biochemistry, epidemiology, and a (potentially flawed) measure of "health".

Comment author: jsalvatier 28 August 2014 06:57:08PM 1 point

I think you're sneaking in a lot with the measure of health. As far as I can see, the only reason it's dangerous is that it cashes out in the real world, on the real, broad population rather than in a simulation. Having the AI reason about a drug's effects on a real-world population definitely seems like a general skill, not a narrow one.