
tukabel comments on The Critical Rationalist View on Artificial Intelligence

Post author: Fallibilist 06 December 2017 05:26PM


Comment author: tukabel 06 December 2017 06:56:06PM 1 point

after the first few lines I wanted to comment that seeing almost religious fervor in combination with a self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation from that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and the subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, "friendly" AI is not really a rigorous scientific term, rather a journalistic or even "propagandistic" one)

also, it's quite likely that at least on the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of "natural stupidity" and DeepAnimal brain parts - having all that power given to them by the Memetic Supercivilization of Intelligence, which currently lives on a humanimal substrate, though <1%)

but this "impossibility of uploading" is a tricky thing - who knows what can or cannot be "transferred" and to what extent will this new entity resemble the original one, not talking about subsequent diverging evolution(in any case, this may spell the end of CR if the disciples forbid uploading for themselves... and others will happily upload to this megacheap and gigaperformant universal substrate)

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World

Comment author: Fallibilist 07 December 2017 12:17:43AM 0 points

and btw., it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing it right at this moment (though probably not "consciously")

Please quote me accurately. What I wrote was:

AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have

I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.