All of John Kluge's Comments + Replies

1HumaneAutomation
That there is no such thing as being 100% objective/rational does not mean one can't be more or less rational than some other agent. Listen. Why do you have a favorite color? How come you prefer leather seats? In fact, why did you have tea this morning instead of coffee? You have no idea. Even if you do (say, you ran out of coffee), you still don't know why you decided to drink tea instead of running down to the store to get some coffee instead. We are so irrational that we don't actually even know ourselves why most of the things we think, believe, want or prefer are such things. The very idea of liking is irrational. And no, you don't "like" a Mercedes more than a Yugo because it's safer - that's a fact, not a matter of opinion.

A "machine" can also give preference to a Toyota over a Honda, but it certainly wouldn't do so because it likes the fabric of the seats, or the fact the tail lights converge into the bumper so nicely. It will list a bunch of facts and parameters and calculate that the Toyota is the thing it will "choose". We humans delude ourselves that this is how we make decisions, but this is of course complete nonsense. Naturally, some objective aspects are considered, like fuel economy, safety, features and options... but the vast majority of people end up with a car that far outstrips their actual, objective transportation needs, and most of that part is really about status: how having a given car makes you feel compared to others in your social environment, and what "image" you (believe you) project on those whose opinion matters most to you. An AI will have none of these wasteful obsessive compulsions.

Look - be honest with yourself, Mr. Kluge. Please. Slow down, think, feel inside. Ask yourself - what makes you want... what makes you desire. You will, if you know how to listen... very soon discover none of that is guided by rational, dispassionate arguments or objective, logical realities. Now imagine an AI/machine that is even half as smart as the a
8Jonathan Claybrough
I don't think reasoning about others' beliefs and thoughts is helping you be correct about the world here. Can you instead try to engage with the arguments themselves and point out at what step you don't see a concrete way for that to happen? You don't show much sign of having read the article, so I'll copy-paste the part with explanations of how AIs start acting in the physical space. So is there anything here you don't think is possible? Getting human allies? Being in control of large sums of compute while staying undercover? Doing science, and getting human contractors/allies to produce the results? Etc.
6Aay17ush
The way you use "intelligence" is different from what many people here mean by that word. Check this out (for a partial understanding of what they mean): https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence
1TAG
The can and the will are separate arguments, but the case has been made for both.

Oh really? Will it have the ability to run an entire lab robotically to do that? If not, then it won't be the AI doing anything. It will be the people doing it. Its power to do anything in the physical world only exists to the extent humans are willing to grant it. 

1LatticeDefect
You can order at least 10k-basepair DNA synthesis online; longer sequences are "call to get a quote" on the sites I found. The smallest synthetic genome for a viable self-replicating bacterium is 531kb. The genome for a virus would be even smaller. My understanding is that there are existing processes to encapsulate genes into virus shells from other species for gene therapy purposes. That leaves the logistics of buying both services, hooking them up and getting the particles injected into some lab animals. It doesn't look trivial, but less complicated than buying an entire nuclear arsenal.
John Kluge-4-12

If the US doesn't develop it, you can be assured that China and Russia will. US scientists are likely to develop it more quickly, but assuming it is possible, Chinese and Russian scientists, given enough time and resources, will develop it eventually. If it is possible, there is no stopping it from happening. Someone will do it. It is pointless to pretend otherwise. 

2Zack Sargent
There's a joke in the field of AI about this. Q: How far behind the US is China in AI research? A: About 12 hours.

I live in the physical world. For a computer program to kill me, it has to have power over the physical world and some physical mechanism to do that. So, anyone claiming that AI is going to destroy humanity needs to explain the physical mechanism by which that will happen. This article, like every other one I have seen making that argument, fails to do that. 

3jacob_cannell
One likely way AI kills humanity is indirectly, by simply outcompeting us. They become more intelligent; their consciousness is recognized in at least some jurisdictions; those jurisdictions experience rapid, unprecedented technological and economic growth and become the new superpowers; less and less of world GDP goes to humans; we diminish.
1Archimedes
One of the simplest ways for AI to have power over the physical world is via humans as pawns. A reasonably savvy AI could persuade/manipulate/coerce/extort/blackmail real-life people to carry out the things it needs help with. Imagine a powerful mob boss who is superintelligent, never sleeps, and continuously monitors everyone in their network.
1Roman Leventov
For a superintelligent AI, it will be trivial to orchestrate engineered superpandemics that kill 90+% of people; finishing off the disorganised rest will be easy.
Raemon (Moderator Comment) 4238

I want to step in here as a moderator. We're getting a substantial wave of new people joining the site who aren't caught up on all the basic arguments for why AI is likely to be dangerous. 

I do want people with novel critiques of AI to be able to present them. But LessWrong is a site focused on progressing the cutting edge of thinking, and that means we can't rehash every debate endlessly. This comment makes a lot of arguments that have been dealt with extensively on this forum, in the AI box experiment, Cold Takes, That Alien Message, So It Looks Lik...

4HumaneAutomation
I think you're making a number of flawed assumptions here, Sir Kluge.

1) Uncontrollability may be an emergent property of the G in AGI. Imagine you have a farm hand that works super fast, does top quality work, but now and then there just ain't nothing to do so he goes for a walk, maybe flirts around town, whatever. That may not be that problematic, but if you have a constantly self-improving AI that can give us answers to major massive issues that we then have to hope to implement in the actual world... chances are that it will have a lot of spare time on its hands for alternative pursuits... either for "itself" or for its masters... and they will not waste any time grabbing max advantage in min time, aware they may soon face a competing AGI. Safeguards will just get in the way, you see.

2) Having the G in AGI does not at all have to mean it will then become human in the sense it has moods, emotions or any internal "non-rational" state at all. It can, however, make evaluations/comparisons of its human wannabe-overlords and find them very much inferior, infinitely slower and generally rather of dubious reliability. Also, they lie a lot. Not least to themselves. If the future holds something of a Rationality rating akin to a credit rating, we'd be lucky to score above junk status; the vast majority of our needs, wants, drives and desires are all based on wanting to be loved by mommy and dreading death. Not much logic to be found there. One can be sure it will treat us as a joke, at least in terms of intellectual prowess and utility.

3) Any AI we design that is an AGI (or close to it) and has "executive" powers will almost inevitably display collateral side-effects that may run out of control and cause major issues. What is perhaps even more dangerous is an A(G)I that is being used in secret or for unknown ends by some criminal group or... you know... any "other guys" who end up gaining an advantage of such enormity that "the world" would be unable to stop, control o