PaulAlmond comments on Exploitation and cooperation in ecology, government, business, and AI - Less Wrong
Physics is local. The speed of light limit is a consequence of that general principle. The local nature of our universe implies some strict limits on intelligence. Curiously, it looks like the only way to transcend these limits (to get a really powerful single intelligence/computer) is to collapse into a black hole, at which point you necessarily seal yourself off and give up any power in this universe. Interesting indeed.
But I have no idea how you leap to the conclusion that "there is therefore no reason to expect individuals to exist in a post-AI society" - though that's partly because I don't know what a post-AI society is. I understand post-human... but post-AI? Is that the next thing after the next thing? That seems to be getting ahead of ourselves.
Also, you seem to reach the conclusion that there will not necessarily be any individuality in the 'post-AI' future society, but then give several good reasons why such individuality may persist (namely, the speed of light and the locality of physics).
But what is individuality? One could say that we are a global consciousness today with just the "bulk of computation" in "small, local units".
I don't think a really big computer would have to collapse into a black hole, if that is what you are saying. You could build an active support system into a large computer. For example, you could build it as a large sphere with circular tunnels running around inside it, with projectiles continually moving around inside the tunnels, kept away from the tunnel walls by a magnetic system, and moving much faster than orbital velocity. These projectiles would exert an outward force against the tunnel walls, through the magnetic system holding them in their trajectories around the tunnels, opposing gravitational collapse. You could then build it as large as you like - provided you are prepared to give up some small space to the active support system and are safe from power cuts.
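The active-support idea above can be sketched with a back-of-envelope calculation: a projectile circulating faster than orbital velocity needs more centripetal force than gravity alone supplies, and the magnetic system transmits that excess outward to the tunnel walls. The numbers below (an Earth-like enclosed mass and a specific tunnel radius) are illustrative assumptions, not figures from the comment.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24         # assumed enclosed mass (Earth-like), kg
r = 6.4e6           # assumed tunnel radius from the center, m

g = G * M / r**2                  # local gravitational acceleration
v_orbit = math.sqrt(G * M / r)    # speed at which a projectile merely orbits

v = 2 * v_orbit                   # run the projectiles at twice orbital speed
# Net outward acceleration transmitted to the walls per unit projectile
# mass: the centripetal requirement v^2/r minus what gravity provides.
a_out = v**2 / r - g

print(f"orbital speed: {v_orbit:.0f} m/s")
print(f"net outward support per kg of projectile: {a_out:.1f} m/s^2")
```

At twice orbital speed the outward support works out to exactly 3g per kilogram of projectile, since v^2/r = 4g; faster projectiles buy more support per unit of circulating mass.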
What is the problem with whoever voted that down? There isn't any violation of the laws of nature involved in actively supporting something against collapse like that - any more than there is with the idea that inertia keeps an orbiting object up off the ground. While it would seem to be difficult, you can assume extreme engineering ability on the part of anyone building a hyper-large structure like that in the first place. Could I have an explanation of what the issue is? Did I misunderstand the reference to computers collapsing into black holes, for example?
Hyper-large structures are hyper-slow and hyper-dumb. See my reply above. The future of computation is to shrink forever. I didn't downvote your comment, btw.
The general idea is that because of the speed of light limitation, a computer's maximum clock speed and communication efficiency are inversely proportional to its size.
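The inverse scaling is easy to see numerically: a signal crosses the machine at best at light speed, so one-way signal delay grows linearly with size and the rate at which the whole machine can take globally coordinated steps falls as 1/size. The sizes below are arbitrary illustrative choices.

```python
# Best-case one-way signal delay across a machine of a given size, and the
# corresponding maximum rate of globally synchronized computational steps.

c = 3.0e8  # speed of light, m/s

for name, size_m in [("chip", 0.03), ("room-sized", 10.0), ("planet-sized", 1.3e7)]:
    crossing_time = size_m / c            # best-case one-way signal delay, s
    sync_rate = 1.0 / crossing_time       # max globally coordinated steps/s
    print(f"{name:>13}: crossing {crossing_time:.2e} s, "
          f"~{sync_rate:.2e} coordinated steps/s")
```

A 3 cm chip can coordinate on the order of 10^10 steps per second, while a planet-sized machine gets a few dozen at best - hence "hyper-large structures are hyper-slow."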
The ultimate computer is thus necessarily dense to the point of gravitational collapse. See Seth Lloyd's "Ultimate physical limits to computation" paper for the details.
Any old humdrum really big computer wouldn't have to collapse into a black hole - but any ultimate computer would. In fact, the size of the computer isn't even the issue: the ultimate configuration of any matter for computation must (in theory) have extremely high density, to maximize speed and minimize inter-component delay.
What about the uncertainty principle as component size decreases?
Look up Seth Lloyd; on his Wikipedia page, the first external link is "Ultimate physical limits to computation".
The uncertainty principle limits the maximum information storage per gram of mass and the maximum computation rate in terms of bit operations per unit of energy; he discusses all of that.
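The computation-rate side of those limits can be sketched numerically. In Lloyd's paper, the Margolus-Levitov theorem bounds operations per second by 2E/(πħ), where E is the energy available for computation; for his "ultimate laptop" of 1 kg with all its mass-energy devoted to computing, this gives roughly 5.4 × 10^50 ops/s.

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.9979e8        # speed of light, m/s
m = 1.0             # mass of the "ultimate laptop", kg

E = m * c**2                        # total available mass-energy, J
ops_per_sec = 2 * E / (math.pi * hbar)   # Margolus-Levitov bound

print(f"energy: {E:.3e} J")
print(f"max ops/s: {ops_per_sec:.2e}")   # ~5.4e50, matching Lloyd's figure
```

The storage limit comes from a separate entropy argument (Lloyd estimates on the order of 10^31 bits for the 1 kg, 1 liter device), which is the information-per-gram bound mentioned above.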
However, the uncertainty principle is only really a limitation for classical computers. A quantum computer doesn't have that issue (he discusses classical computation only; an ultimate quantum computer would be enormously more powerful).