
MileyCyrus comments on Criticisms of intelligence explosion - Less Wrong Discussion

Post author: lukeprog, 22 November 2011 05:42PM



Comment author: MileyCyrus, 22 November 2011 06:25:30PM, 13 points

Too much theory, not enough empirical evidence. In theory, FAI is an urgent problem that demands most of our resources (Eliezer is on the record saying that the only two legitimate occupations are working on FAI and earning money to donate to others who are working on FAI).

In practice, FAI is just another Pascal's mugging / Lifespan Dilemma / St. Petersburg paradox. From XiXiDu's blog:

To be clear, extrapolations work and often are the best we can do. But since there are problems such as the above, which we perceive to be undesirable and which lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics.

[...]

Taking into account considerations of vast utility or low probability quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.

Until [various rationality puzzles] are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications, if only because I still lack the necessary educational background to trust my comprehension and judgement of the various underlying concepts and methods used to arrive at those implications.
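(The St. Petersburg paradox mentioned above is easy to state numerically. A minimal sketch, not from the original comment: a coin is flipped until it lands heads, and landing heads on flip k pays 2^k. Each term of the expectation contributes exactly 1, so the expected payout grows without bound even though almost all the probability mass sits on small payouts.)

```python
def truncated_expected_value(max_flips: int) -> float:
    """Expected payout of the St. Petersburg game capped at max_flips flips.

    P(first heads on flip k) = 2**-k and the payout is 2**k,
    so every term in the sum contributes exactly 1.
    """
    return sum((2 ** -k) * (2 ** k) for k in range(1, max_flips + 1))

for n in (10, 100, 1000):
    # The truncated expectation grows linearly with the cap,
    # so the uncapped expectation diverges.
    print(n, truncated_expected_value(n))
```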

Comment author: lukeprog, 22 November 2011 08:52:21PM, 2 points

Added.

Comment author: djcb, 22 November 2011 07:27:20PM, 1 point

I would also be very interested in seeing some smaller stepping stones implemented -- I imagine that creating an AGI (let alone FAI) will require massive amounts of maths, proofs and the like. It seems very useful to create artificially intelligent mathematics software that can 'discover' and prove interesting theorems (and explain its steps). Of course, there is software that can check relatively simple proofs, but there's nothing that could prove e.g. Fermat's Last Theorem -- we still need very smart humans for that.

Of course, it's extremely hard to create such software, but it would be much easier than AGI/FAI, and at the same time it could help with constructing those (and help in some other areas, say QM). The difficulty of constructing such software might also give us some understanding of the difficulties of constructing general artificial intelligence.
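(For a sense of what machine-checked proof of "relatively simple" theorems looks like in practice, here is a toy illustration, assuming the Lean 4 proof assistant; the theorem name is purely illustrative and not from the comment above:)

```lean
-- A machine-checked proof that addition of natural numbers is commutative,
-- discharged by appealing to the existing library lemma Nat.add_comm.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Today's proof assistants verify proofs like this mechanically, but a human still has to write them; the 'discovery' step the comment asks for is the hard, unsolved part.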