
faul_sname comments on Irrationality Game II - Less Wrong Discussion

13 [deleted] 03 July 2012 06:50PM




Comment author: faul_sname 05 July 2012 03:08:57PM 0 points

It all depends on whether an AGI can start out significantly past human intelligence. If the answer is no, then it's really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can't.

Also, even a small group of humans could swing the election for Mayor of London. An AGI with a few million dollars at its disposal might be able to hire such a group.

Comment author: TheOtherDave 05 July 2012 03:45:16PM 3 points

whether an AGI can start out significantly past human intelligence. If the answer is no, then it's really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can't.

It's perhaps also worth asking whether intelligence is as linear as all that.

If an AGI is on aggregate lower than human intelligence, but is architected differently from humans such that areas of mindspace are available to it that humans are unable to exploit due to our cognitive architecture (in a sense analogous to how humans are better general-purpose movers-around than cars, but cars can nevertheless perform certain important moving-around tasks far better than humans), then that AGI may well have a significant impact on our environment (much as the invention of cars did).

Whether this is a danger or not depends a lot on specifics, but in terms of pure threat capacity... well, anything that can significantly change the environment can significantly damage those of us living in that environment.

All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP's point. Maybe.

Comment author: TimS 05 July 2012 03:18:47PM 2 points

Yes, that is the key question. I suspect that AGI will be human-level intelligent for some amount of time (maybe only a few seconds). So the question of how the AGI gets smarter than that is very important in analyzing the likelihood of FOOM.

Re: Elections - hundreds of millions of dollars might affect whether Boehner or Pelosi was president of the United States in 2016. There's essentially no chance that that amount of money could make me President in 2016.

Comment author: faul_sname 05 July 2012 05:42:13PM 0 points

Perhaps not make you president, but that amount of money and an absence of moral qualms could probably give you equivalent ability to get things done. Becoming President of the US is considerably more difficult than becoming Mayor of London (I think). However, both of those seem to be less than maximally efficient at accomplishing specific goals. For that, you'd want to become the CEO of a large company or something similar (which you could probably do with $1-500M, depending on the company). Or perhaps CIO or CFO if that suits your interests better.