lessdazed comments on Connecting Your Beliefs (a call for help) - Less Wrong Discussion

Post author: lukeprog 20 November 2011 05:18AM




Comment author: rwallace 20 November 2011 02:53:23PM 4 points

Okay, to look at some of the specifics:

Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.

The linked article is amusing but misleading; the described 'ultimate laptop' would essentially be a nuclear explosion. The relevant physical limit is Landauer's bound of ln(2)kT energy dissipated per bit erased; at room temperature (300 K) this is about 3e-21 joules. We don't know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit operations per second for 20 watts, which is only a few orders of magnitude short of even the theoretical limit imposed by thermodynamics, let alone whatever practical limits may arise once we take into account issues like error correction, communication latency and bandwidth, and the need for reprogrammability.
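The arithmetic above can be checked directly. This is a rough sketch; the 1e18 operations-per-second figure is the middle-of-the-road brain estimate quoted in the comment, not a measured value:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's bound: minimum energy dissipated per bit erased.
landauer = math.log(2) * k_B * T
print(f"Landauer limit: {landauer:.2e} J/bit")   # ~2.9e-21 J

# Brain estimate from the comment: ~1e18 ops/s on a 20 W budget.
brain_power = 20.0       # watts
brain_ops_per_sec = 1e18
energy_per_op = brain_power / brain_ops_per_sec
print(f"Brain: ~{energy_per_op:.0e} J per operation")

# How far above the thermodynamic floor is that?
ratio = energy_per_op / landauer
print(f"Factor above the limit: ~{ratio:.0f} "
      f"(~{math.log10(ratio):.0f} orders of magnitude)")
```

On these numbers the brain sits only three to four orders of magnitude above the thermodynamic floor, which is the comment's point.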

Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.

Indeed we hit this some years ago. Of course as you observe, it is impossible to prove serial speed won't start increasing again in the future; that's inherent in the problem of proving a negative. If such proof is required, then no sequence of observations whatsoever could possibly count as evidence against the Singularity.

Superior parallel power:

Of course uses can always be found for more parallel power. That's why we humans make use of it all the time, both by assigning multiple humans to a task, and increasingly by placing multiple CPU cores at the disposal of individual humans.

Improved algorithms:

Finding these is (assuming P!=NP) intrinsically difficult; humans and computers can both do it, but neither will ever be able to do it easily.

Designing new mental modules:

As for improved algorithms.

Modifiable motivation systems:

An advantage when they reduce akrasia, a disadvantage when they make you more vulnerable to wireheading.

Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won't be enough computing power to run many copies.

Indeed there won't, at least initially; supercomputers don't grow on trees. Of course, computing power tends to become cheaper over time, but that does take time, so no support for hard takeoff here.

Alternatively, that copying minds would result in rapidly declining marginal returns and that the various copying advantages discussed by e.g. Hanson and Shulman aren't as big as they seem.

Matt Mahoney argues that this will indeed happen because an irreducible fraction of the knowledge of how to do a job is specific to that job.

Perfect co-operation:

Some of the more interesting AI work has been on using a virtual market economy to allocate resources between different modules within an AI program, which suggests computers and humans will be on the same playing field.
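To illustrate the idea only (the comment names no specific system, so this is a hypothetical toy, not any published design): the simplest market-style allocator just divides a compute budget among modules in proportion to what each bids. All module names here are made up.

```python
def allocate(bids, total_compute):
    """Divide a compute budget among modules in proportion to their bids.

    `bids` maps module name -> bid (in some internal currency reflecting
    the value that module expects from additional compute).
    """
    total_bid = sum(bids.values())
    if total_bid == 0:
        return {name: 0.0 for name in bids}
    return {name: total_compute * bid / total_bid
            for name, bid in bids.items()}

# Hypothetical modules bidding for a 100-unit compute budget.
bids = {"planner": 5.0, "vision": 3.0, "memory": 2.0}
shares = allocate(bids, total_compute=100.0)
print(shares)   # planner gets half, vision 30%, memory 20%
```

Real proposals are more elaborate (modules earn currency by contributing to task success), but the proportional-share auction captures the core mechanism.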

Superior communication:

Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.

Transfer of skills:

Addressed under copyability.

Various biases:

Hard to say, both because it's very hard to see our own biases, and because a bias that's adaptive in one situation may be maladaptive in another. But if we believe maladaptive biases run deep, such that we cannot shake them off with any confidence, then we should be all the more skeptical of our far beliefs, which are the most susceptible to bias.

Of course, there is also the fact that humans can and do tap the advantages of digital computers, both by running software on them, and in the long run potentially by uploading to digital substrate.

Comment author: lessdazed 21 November 2011 12:47:43AM 2 points

Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.

The best way to colonize Alpha Centauri has always been to wait for technology to improve rather than launching an expedition, but it's impossible for that to continue to be true indefinitely. Short of direct mind-to-mind communication or the like, combined with a concurrent halt to AI progress, AI advances will probably outpace advances in human communication in the near to medium term.

It seems unreasonable to believe that human minds, optimized for considerations such as politicking in addition to communication, will be able to communicate as well as designed AIs. Human mind development was constrained by ancestral energy availability, head size, and so on, so it's unlikely that we are optimally sized minds for forming a group of minds, even assuming an AI can't reap huge efficiencies by operating essentially as a single mind, regardless of scale.

Comment author: rwallace 21 November 2011 12:58:09AM 4 points

Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl's law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.