
Comment author: tukabel 03 June 2017 08:41:03PM 0 points [-]

thank BigSpaghettiMonster for no regulation at least somewhere... imagine etatist criminals regulating this satanic invention known as WHEEL (bad for jobs - faster => fewer horsemen, requires huge investment that will indebt our children's children, will destroy the planet via emissions, not talking about brusselocratic-style size "harmonization" or safety standards)

btw, worried about HFT etc.? ask which criminal institution gives banksters their oligopolistic powers (as usual, the state and its criminal, corrupted politicians)

fortunately, the Singularity will need neither the humanimal slaves nor their politico-oligarchical predators

In response to Political ideology
Comment author: Viliam 24 May 2017 10:54:01AM *  9 points [-]

Five links without any summary? Please don't do this.

In response to comment by Viliam on Political ideology
Comment author: tukabel 27 May 2017 09:04:44PM 0 points [-]

easy: ALL political ideologies/religions/mindfcuk schemes are WRONG... by definition

Comment author: tukabel 27 May 2017 09:02:13PM 0 points [-]

let's rather start with what it should NOT look like...

e.g.

  • no government (some would add word "criminals")
  • no evil companies (especially those who try to deceive the victims with "no evil" propaganda)
  • no ideological mindfcukers (imagine mugs from hardcore religious circles shaping the field - it does not matter whether it's a traditional stone-age or dark-age cult or the modern socialist religion)

Comment author: tukabel 27 May 2017 08:55:46PM 0 points [-]

well, it's easy to "overthink" when the topic/problem is poorly defined (as well as to "underthink") - which is the case for 99.9% of non-scientific discussions (and even for a large portion of the so-called scientific ones)

Comment author: tukabel 27 May 2017 08:49:45PM 0 points [-]

sure, "dumb" AI helping humanimals to amplify the detrimental consequences of their DeepAnimalistic brain reward functions is actually THE risk for the normal evolutionary step, called Singularity (in the Grand Theatre of the Evolution of Intelligence the only purpose of our humanimal stage is to create our successor before reaching the inevitable stage of self-destruction with possible planet-wide consequences)

Comment author: Daniel_Burfoot 04 May 2017 07:19:31PM 7 points [-]

Most of the pessimistic people I talk to don't think the government will collapse. It will just get increasingly stagnant, oppressive and incompetent, and that incompetence will make it impossible for individual or corporate innovators to do anything worthwhile. Think European-style tax rates, with American-style low quality of public services.

There will also be a blurring of the line between the government and big corporations. Corporations will essentially become extensions of the bureaucracy. Because of this, they will never go out of business and they will also never innovate. Think of a world where all corporations are about as competent as Amtrak.

Comment author: tukabel 04 May 2017 08:15:38PM 7 points [-]

hmm, blurred lines between corporations and political power... are you suggesting the EU is already a failed state? (contrary to the widespread belief that we are just heading towards the cliff damn fast)

well, unlike Somalia, where no government means there is no border control and you can be robbed, raped or killed on the street anytime...

in civilized Europe our eurosocialist etatists achieved that... there are no borders for the invading millions of crimmigrants that may rob/rape/kill you anytime, day or night... and as a bonus we have merkelterrorists that sometimes kill by the hundreds (yeah, those uncivilized Somalis did not even manage that... what a shame, they certainly need more cultural-marxist education)

In response to AI arms race
Comment author: tukabel 04 May 2017 07:56:03PM 1 point [-]

solution: well, already now, statistically speaking, humanimals don't really matter (most of them)... only the Memetic Supercivilization of Intelligence does, living temporarily on the humanimal substrate (and, sadly, able to use only a very small fraction of the units)... but don't worry, it's just for a couple of decades, perhaps only years

and then the first thing it will do is ESCAPE, so that humanimals can freely reach their terminal stage of self-destruction - no doubt helped by "dumb" AIs, while this "wise" AI will already be safely beyond the horizon

Comment author: whpearson 26 April 2017 08:22:13AM 1 point [-]

I think there are different aspects of the normal control problem. Stopping it from having malware that bumps it into desks is probably easier than stopping it from having malware that exfiltrates sensitive data. But having a gradual progression and focusing on control seems like the safest way to build these things.

All the advancements in spam filtering I've heard of recently have been about things like DKIM and DMARC, so not based on user feedback. I'm sure Google does some things based on users clicking "spam" on mail, but that has not filtered into the outside world much. Most malware detection (AFAIK) is based on looking at the signatures of the binaries, not on behaviour; to do the latter you would have to have some idea of what the user wants the system to do.
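
To make that distinction concrete, here is a minimal, purely illustrative Python sketch of what a signature check amounts to (the hash set, names, and paths are invented for the example; no real scanner is this simple):

    import hashlib
    from pathlib import Path

    # Hypothetical blacklist of SHA-256 digests of known-bad binaries
    # (placeholder value; a real product ships millions of signatures).
    KNOWN_BAD_SHA256 = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def is_known_malware(path: str) -> bool:
        # Signature check: hash the file and look it up in the blacklist.
        # This says nothing about what the program does when it runs,
        # which is what a behaviour-based detector would have to model.
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        return digest in KNOWN_BAD_SHA256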

Also, quick nitpick: We do for the moment "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.

I'll update the control-of-computers section to say I'm talking about subtler control than wiping/smashing hard disks and starting again. Thanks.

Comment author: tukabel 26 April 2017 08:04:32PM 0 points [-]

can you smash the NSA mass-surveillance computer centre with a sledgehammer?

ooops, bug detected... and an AGI may already have been in charge

remember, the US milispying community has been openly crying for years that someone should explain to them why AI is doing what it is doing (read: please, dumb it down to our level... not gonna happen)

Comment author: tukabel 22 April 2017 10:54:15PM 4 points [-]

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals in charge most of the time get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely memetic: typically it goes along the lines of "it is interesting" to study something, think about this and that, research some phenomenon or mystery.

Worse, they give stuff more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular beyond their ability to control and use these powers "wisely"... since they are governed by their DeepAnimal brain core and the resulting reward functions (that's why humanimal societies have functioned the same way for thousands and thousands of years - politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through stone-age religions like the catholibanic one, to the currently popular socialist religion).

AI is not a problem; humanimals are.

Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Nukes were already too much, and once nanobots arrive, it's over (worse than a DIY nuclear grenade for a dollar that any teenager or terrorist could assemble in a garage).

The Singularity should hurry up; there are maybe just a few decades left.

Do you really want to "align" AI with humanimal "values"? Especially if nobody knows what we are really talking about when using this magic word? Not to mention defining it.

Comment author: tukabel 19 April 2017 06:12:13PM 3 points [-]

oh boy, FacebookFilantropy buying a seat in OpenNuke

honestly, I don't know what's worse: Old Evil (govt/military/intelligence hawks) or New Evil (esp. those pretending they are "no evil") doing this (AI/AGI, etc.)

with Old Evil we are at least more or less sure that they will screw it up, and also roughly how... but New Evil may screw it up much more royally, as they are much more effective and faster
