gjm comments on Is That Your True Rejection? - Less Wrong

Post author: Eliezer_Yudkowsky 06 December 2008 02:26PM


Comment author: Mader_Levap 18 September 2015 10:22:53AM 0 points

"Which transhumanist ideas are 'not even wrong'?"

The Technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture, or of The End Of The World As We Know It endemic in various cults, and equally likely.

The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.

More precisely, my claim is that a mind at any level of complexity is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.

An AI with any self-preservation instinct would be about as likely to willingly perform direct self-modification on its own mind as you would be to volunteer for an ice pick through the eye socket.

So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.

Comment author: gjm 11 January 2016 05:51:03PM 4 points

Our brains are mysterious to us not simply because they're our brains and no one can fully understand themselves, but because our brains are the result of millions of years of evolutionary kludges and because they're made out of hard-to-probe meat. We are baffled by chimpanzee brains or even rabbit brains in many of the same ways as we're baffled by human brains.

Imagine an intelligent agent whose thinking machinery is designed differently from ours. It's cleanly and explicitly divided into modules. It comes with source code and comments and documentation and even, in some cases, correctness proofs. Maybe there are some mysterious black boxes; they come with labels saying "Mysterious Black Box #115. Neural network trained to do X. Empirically appears to do X reliably. Other components assume only that it does X within such-and-such parameters.". Its hardware is made out of (notionally) discrete components with precise specifications, and comes with some analysis to show that if the low-level components meet the spec then the overall function of the hardware should be as documented.

Suppose that's your brain. You might, I guess, be reluctant to experiment on it in any way in place, but you might feel quite comfortable changing EXPLICIT_FACT_STORAGE_SIZE from 4GB to 8GB, or reimplementing the hardware on a new semiconductor substrate you've designed that lets every component run at twice the speed while remaining within the appropriately-scaled specifications, and making a new instance. If it causes disaster, you can probably tell; if not, you've got a New Smarter You up and running.
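The thought experiment can be sketched in code. This is purely illustrative, not a claim about any real AI system: the names (MindConfig, EXPLICIT_FACT_STORAGE_SIZE as a field, meets_spec, the particular bounds) are all hypothetical, standing in for the "documented parameters with checkable specs" idea in the comment above.

```python
# Hypothetical sketch: a mind whose parameters are explicit and whose
# components carry documented, machine-checkable specifications.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MindConfig:
    explicit_fact_storage_bytes: int  # size of the explicit fact store
    clock_hz: int                     # hardware component clock speed

def meets_spec(config: MindConfig) -> bool:
    """Check the (assumed) documented bounds before building a new instance."""
    return (config.explicit_fact_storage_bytes >= 4 * 2**30
            and config.clock_hz <= 2_000_000_000)

old = MindConfig(explicit_fact_storage_bytes=4 * 2**30,
                 clock_hz=1_000_000_000)

# Double the fact store and the clock speed, as in the example above,
# without touching the running instance "in place".
new = replace(old,
              explicit_fact_storage_bytes=8 * 2**30,
              clock_hz=2 * old.clock_hz)

if meets_spec(new):
    # Launch the new instance alongside the old one, observe it,
    # and switch over only if no disaster is apparent.
    print("launch new instance")
```

The point of the sketch is that modification happens by constructing and vetting a *new* configuration against documented specs, rather than by mutating the running mind, which is why it avoids the ice-pick worry in the earlier comment.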

Of course, maybe you couldn't tell if some such change caused disasters of a sufficiently subtle kind. That's a reasonable concern. But this isn't an ice-pick-through-the-eye-socket sort of concern, and it isn't the sort of concern that makes it obvious that "recursive self-improvement is not possible".

Comment author: Lumifer 11 January 2016 06:04:43PM 2 points

but you might feel quite comfortable changing EXPLICIT_FACT_STORAGE_SIZE

While I agree with the overall thrust of your comment, this brought to mind an old anecdote...

Comment author: gjm 11 January 2016 09:34:25PM 4 points

Such things are why I said "maybe you couldn't tell if some such change caused disasters of a sufficiently subtle kind".