Comment author: Paul_Gowder2 29 September 2008 06:45:11AM 0 points

(Noting that the math-ey version of that reason has just been stated by Peter and Psy-kosh.)

Comment author: Paul_Gowder2 29 September 2008 06:44:00AM 1 point

I rather like the third answer on his blog (Doug D's). A slight elaboration on that: one virtue of a scientific theory is its generality, and prediction is a better test of generality than explanation. Demanding predictive power from a theory excludes ad hoc theories of the sort Doug D mentioned, which do nothing more than re-state the data. Note that this reasoning does not require any math. :-)

Comment author: Paul_Gowder2 08 August 2008 01:18:46AM 7 points

Eliezer, you sometimes make me think that the solution to the friendly AI problem is to pass laws mandating death by torture for anyone who even begins to attempt to make a strong AI, and hope that we catch them before they get far enough.
