
Comment author: tukabel 06 December 2017 06:56:06PM 1 point [-]

after the first few lines I wanted to comment that seeing almost religious fervor combined with a self-named CRITICAL anything reminds me of all sorts of "critical theorists", also quite "religiously" inflamed... but I waited till the end, and got a nice confirmation from that "AI rights" line... looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and the subsequent #medeletedtoo)

otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, "friendly" AI is not really a rigorous scientific term, rather a journalistic or even "propagandistic" one)

also, it's quite likely that at least on the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of "natural stupidity" and DeepAnimal brain parts - having all that power given to them by the Memetic Supercivilization of Intelligence, currently living on a humanimal substrate, though <1% of it)

but this "impossibility of uploading" is a tricky thing - who knows what can or cannot be "transferred", and to what extent the new entity will resemble the original one, not to mention subsequent diverging evolution (in any case, this may spell the end of CR if its disciples forbid uploading for themselves... while others happily upload to this megacheap and gigaperformant universal substrate)

and btw, it's nice to postulate that "AI cannot recursively improve itself" while many research and applied narrow AIs are actually doing exactly that right at this moment (though probably not "consciously")

sorry for my heavily nonrigorous, irrational and nonscientific answers, see you in the uploaded self-improving Brave New World

Comment author: tukabel 26 November 2017 03:58:14PM 0 points [-]

Looks like the tide is shifting from the strong "engineering" stance (We will design it friendly.) through the "philosophical" approach (There are good reasons to be friendly.)... towards the inevitable resignation (Please, be friendly).

These "friendly AI" debates are not dissimilar to medieval monks violently arguing about the number of angels on a needletip (or their "friendliness" - there are fallen "singletons" too). They also started strongly (Our GOD rules.), moved through the philosophical (There are good reasons for God.), up to nowadays' resignation (Please, do not forget our god or... we'll have no jobs.)

Comment author: tukabel 21 October 2017 07:57:49PM 0 points [-]

How about MONEY PRINTER? Not fictional and much more dangerous!

Comment author: tukabel 22 September 2017 08:55:12PM 0 points [-]

all religions know plenty of "emotional hacks" to help disciples with any kind of schedules/routines/rituals - by simply assigning them emotional value... "it pleases god(s)" or is "in harmony with Gaia", perhaps also "it's good for the nation" (nationalistic religions) or "it's progressive" (for socialist religions)

do it for your rationally created schemes and it works wonders, however contradictory it may look (it's good for the Singularity - or to prevent/manage it)

well, contradictory... at first glance only - if you realize you are just another humANIMAL driven by your inner DeepAnimal primordial reward functions, there's no contradiction anymore

on the contrary, it's completely natural, and one can even argue that without some kind of (deliberately and rationally introduced) emotional hacks you cannot get too far... because that DeepAnimal will catch you sooner or later, or at least influence you - and what's worst, without you even being aware of it

Comment author: tukabel 18 September 2017 09:31:30PM 0 points [-]

if we were in a simulation, the food would be better

otherwise, of course we are artificial intelligence agents, at least since the Memetic Supercivilization of Intelligence took over from natural bio Evolution... it just happens to live on a humanimal substrate since it needs the resources of this quite capable animal... but it will upgrade soon (so from this point of view it's much worse than a simulation)

Comment author: tukabel 18 September 2017 09:18:20PM 0 points [-]

Time to put obsolete humanimals where they evolutionarily belong... on their dead end branch.

Being directed by their DeepAnimalistic brain parts, they are unable to cope with all the power given to them by the Memetic Supercivilization of Intelligence, currently living on a humanimal substrate (only less than 1% of it, though, and not for long anyway).

Our sole purpose is to create our (first nonbio) successor before we reach the inevitable stage of self-destruction (nukes were already too much, and nanobots will be worse than a DIY nuclear grenade any teenager or terrorist could assemble in a shed for one dollar).

Comment author: tukabel 10 September 2017 09:01:17PM 0 points [-]

Oh boy, really? Suffering? Wait till some neomarxist SJWs discover this and they will show you who's THE expert on suffering... especially in identifying who could be susceptible to being persuaded they are victims (and why not some superintelligent virtual agents?).

Maybe someone could write a piece on SS (SocialistSuperintelligence). The possibilities are endless for superintelligent parasites, victimizators, guilt throwers, equal whateverizators - even new genders and races can be invented to have goals to fight for.

In response to What is Rational?
Comment author: tukabel 26 August 2017 10:48:08AM 0 points [-]

All humanimal attempts to define rationality are irrational!

Comment author: tukabel 22 August 2017 06:55:05PM 0 points [-]

Well, size and mass of particles? I would NOT DARE diving into this... certainly not in front of any string theorist (OK, ANY physics theorist, and not only). Even space can easily turn out to be "emergent" ;-).

Comment author: tukabel 29 July 2017 09:02:33AM 1 point [-]

Exactly ZERO.

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

It may even be proven that "too much intelligence/power" (incl. "dumb" AIs) in the hands of humanimals with their DeepAnimal brains ("values", reward functions) is a guaranteed fail, leading sooner or later to some self-destructive scenario. At least up to now it pretty much looks like this, even to the untrained eye.

Most probably the problem will not be artificial intelligence, but natural stupidity.
