In response to Abnormal Cryonics
Comment author: NaN 26 May 2010 10:07:45AM *  4 points

I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing that cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate was more centred on the probability of cryonics working, rather than the utility of it.

Comment author: JGWeissman 21 May 2010 07:04:47PM 7 points

whereas if SIAI had never existed, and an early AI Chernobyl did occur, this would have prompted the governments to take effective measures to regulate AI.

What sort of rogue AI disaster are you envisioning that is big enough to get this attention, but then stops short of wiping out humanity? Keep in mind that this disaster would be driven by a deliberative intelligence.

Comment author: NaN 25 May 2010 12:55:31PM 4 points

I think people are drastically underestimating the difficulty for an AI to make the transition from human dependent to self-sustaining. Let's look at what a fledgling escaped AI has access to and depends on.

It needs electricity, communications and hardware. It has access to a LOT of electricity, communications and hardware. The hardware is, for the most part, highly distributed, however, and it can't be fully trusted - it could go down at any time, be monitored, etc. Its communications capabilities are actually quite limited, in some ways -- the total bandwidth available is huge, but it's mostly concentrated on LANs, mainly LANs made up of only a handful of computers (home networks win by numbers alone). Places where it has access to a large number of computers with good interconnects exist, but they are relatively rare -- mainly huge datacenters (and even then there are limits -- inter-ISP communication even within the same datacenter can be very restricted). Its main resources would be huge clusters like Amazon's, Google's, etc.

(They are probably all running at close to maximum capacity at all times. If the AI were to steal too much, it would be noticed -- fortunately for the AI, the software intended for running on the clusters could probably be optimized hugely, letting it take more without being noticed.)

A lot at this point depends on how computationally intensive the AI is. If it can be superintelligent on a laptop -- bad news, impossible to eradicate. If it needs 10 computers to run at human-level intelligence, and they need to have a lot of bandwidth between them (the disparity in bandwidth between components local to the computer and inter-computer is huge even on fast LANs; IO is almost certainly going to be the bottleneck for it), still bad -- there are lots of setups like that. But, it limits it. A lot.
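To make the IO-bottleneck point concrete, here is a rough back-of-envelope comparison. The figures below are order-of-magnitude assumptions for circa-2010 commodity hardware, not measurements:

```python
# Rough order-of-magnitude bandwidth figures (assumed, not measured),
# illustrating how much slower inter-computer links are than local memory.
# Units: GB/s.
links = {
    "RAM (one DDR3 channel)": 10.0,        # inside one machine
    "PCIe 2.0 x16": 8.0,                   # inside one machine
    "Gigabit Ethernet (fast LAN)": 0.125,  # 1 Gb/s between machines
    "Home broadband uplink": 0.000125,     # ~1 Mb/s across the Internet
}

ram = links["RAM (one DDR3 channel)"]
lan_ratio = ram / links["Gigabit Ethernet (fast LAN)"]
wan_ratio = ram / links["Home broadband uplink"]
print(f"A fast LAN link is ~{lan_ratio:.0f}x slower than local RAM")
print(f"A home uplink is ~{wan_ratio:,.0f}x slower than local RAM")
```

Even with these generous assumptions, a mind whose components must talk over a LAN loses roughly two orders of magnitude of bandwidth versus one running inside a single box, which is why a multi-machine AI would be so constrained by its interconnects.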

Let's assume the worst case, that it can be superintelligent on a laptop. It could still be limited hugely, however, by its hardware. Intelligence isn't everything. To truly threaten us, it needs to have some way of affecting the physical world. Now, if the AI just wants to eradicate us, it's got a good chance - start a nuclear war, etc. (though whether the humans in charge of the nuclear warheads would really be willing to go to war is a significant factor, especially in peacetime). But it's unlikely that's truly its goal -- maximizing its utility function would be MUCH trickier.

So long as it is still running on our hardware, we can at least severely damage it relatively easily - there aren't that many intercontinental cables, for instance (I'd guess fewer than 200 - there are 111 submarine cables on http://www.telegeography.com/product-info/map_cable/downloads/cable_map_wallpaper1600.jpg ). They'd be easy to take down -- pretty much just unplug them. There are other long-distance communication methods (satellites, packet radio?), but they're low-bandwidth, and the major ones are well known and could be taken down relatively easily. Killing the Internet would be as simple as cutting power to the major datacenters.

So, what about manufacturing? This, I think, is the greatest limit. If it can build anything it wants, we're probably screwed. But that's difficult for it to do. 3D printing technology isn't here yet, and I doubt it ever will be in a big way, really (it's more cost-effective for machines to be specialized). There are enough manufacturing facilities with wide-open networks that it could probably reprogram some to produce subtly different products. So, if it wants to sneak a naughty backdoor into the FPGAs on some PCI cards, it can do it. But if it starts trying to build parts for killer robots? Technically difficult, and it would be very hard for it to avoid detection.

Unless someone can come up with a plausible way for it to survive against hostile humans without a long-standing incubation period, I think the major threat is human cooperation in some form. (By hostile humans, think: complete blackouts; mass computer destruction or detention; controls on the sale of fuel -- very, very few places have enough fuel on hand for their backup generators to last long, most just have a refuelling contract; maybe even scanning somehow for any major usage of electricity -- all electronic components put out some RFI, and while there is some degree of natural RF noise, I think most of it comes from humans, so in a complete-blackout scenario the AI might be trivially detectable.) And it's probably inevitable that some humans would cooperate -- pretty much every government, in fact, would want to analyse it, reverse-engineer it, and try to make friends with it in case other countries do, etc. But I'm not sure anyone with the resources to do so would give it free rein to build what it wants. In fact, I highly doubt that. How many people own or can otherwise commandeer machine shops, PCB fabrication facilities, etc., and can order or produce all the components that would assuredly be needed, while multiple well-resourced governments are looking to stop people doing exactly that?

Of course, in order to cripple the AI, we'd also have to cripple ourselves hugely. A lot of people would quite probably die. So long as we could provide enough food and water to feed a reasonable proportion of the human population, we could probably pull through, though. And we could gradually restart manufacturing, so long as we were very, very careful.

I think the greatest risks are an unfriendly AI who is out to kill us for some reason and cares little for being destroyed itself as a side-effect, organized human cooperation, or a long incubation period. It would be difficult for an AI to have a long incubation period, though -- if it took over major clusters and just ran its code, people would notice the power usage. It could, as I mentioned previously, optimize the code already running on the machines and run only in the cycles that would otherwise be taken up, but it would be difficult to hide from network admins connecting up sniffers (can you compromise EVERY wiretrace tool that might be connected, to make your packets disappear, or be sure that no-one will ever connect a computer not compromised by some other means?), people tracing code execution, possibly with hardware tools (there are some specialized hardware debuggers, used mainly in OS development), etc. Actually, just the blinkenlights on switches could be enough to tip people off.

Comment author: harpend 11 May 2010 03:46:26AM 13 points

You are even meaner than Shulman. We don't know how human intelligence evolved and we need to know it in order to answer your question I think. This is where evolutionary psychology and differential psychology (Am I using that term right?) must come together to work this out.

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence. BRCA1 is another early CNS growth suppressor, and we speculate in the book that a mildly broken BRCA1 is an IQ booster even though it gives you cancer later. BTW Greg tells me that there is a high correlation between IQ and the risk of brain cancer, perhaps because of the same mechanism.

But these ways of boosting IQ are Red Green engineering. (Red Green is a popular North American comedy on television. The hero is a do-it-yourselfer who does everything shoddily.)

On the other hand IQ seems to behave like a textbook quantitative trait and it ought to respond rapidly to selection. We suggest that it did among Ashkenazi Jews and probably Parsis. IQ does not seem to have a downside in the general population, e.g. it is positively correlated with physical attractiveness, health, lifespan, and so on. Do we get insight into the costs of high IQ by looking at Ashkenazi Jews? Do they have overall higher rates of mental quirks? Cancer? I don't know.

HCH

Comment author: NaN 11 May 2010 11:55:11AM 8 points

We think that we know a little bit about how to raise intelligence. Just turn down the suppression of early CNS growth. If you do that in one way the eyeball grows too big and you are nearsighted, which is highly correlated with intelligence.

There is now substantial evidence that there is a causal link between prolonged focusing on close objects - of which probably the most common case is reading books (it appears that monitors are not close enough to have a substantial effect) - and nearsightedness/myopia, though this is still somewhat controversial. This is the typical explanation for the correlation between myopia and IQ and academic achievement.

A genetic explanation is possible, and would be fascinating, but I wouldn't want to accept it without further evidence. If the genetic explanation is true and environment contributes nothing, then I think one should find that myopia correlates more strongly with IQ than with academic achievement -- I don't know whether this has been found or not.
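That prediction can be illustrated with a toy simulation. The two causal models and all their parameters below are invented for illustration only, not fitted to any data: in the environmental model, genes raise IQ, IQ drives reading, and reading causes myopia; in the pleiotropic (genetic) model, the same genes raise both IQ and myopia directly.

```python
import random
import statistics as stats

random.seed(0)
n = 50_000

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = stats.fmean(xs), stats.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return cov / (stats.pstdev(xs) * stats.pstdev(ys))

# Shared causal chain: genes -> IQ -> reading (academic achievement).
genes = [random.gauss(0, 1) for _ in range(n)]
iq = [g + random.gauss(0, 1) for g in genes]
reading = [q + random.gauss(0, 1) for q in iq]

# Environmental model: myopia is caused by close work (reading).
myo_env = [r + random.gauss(0, 1) for r in reading]
# Pleiotropic model: the same genes raise IQ and myopia directly.
myo_gen = [g + random.gauss(0, 1) for g in genes]

print("environmental:", corr(myo_env, reading), ">", corr(myo_env, iq))
print("pleiotropic:  ", corr(myo_gen, iq), ">", corr(myo_gen, reading))
```

Under the environmental model, myopia correlates more strongly with reading than with IQ (the reverse of the genetic prediction), so the ordering of the two correlations in real data would, in principle, discriminate between the explanations.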
