
XiXiDu comments on Connecting Your Beliefs (a call for help) - Less Wrong Discussion

24 points. Post author: lukeprog, 20 November 2011 05:18AM


Comment author: Kaj_Sotala, 20 November 2011 01:43:34PM, 14 points

if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?

I'm not marchdown, but:

Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:

  • Superior processing power: Evidence against would be the human brain already being close to the physical limits of what is possible.
  • Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
  • Superior parallel power: Evidence against would be an indication of extra parallel power not being useful for a mind that already has human-equivalent (whatever that means) parallel power.
  • Improved algorithms: Evidence against would be the human brain's algorithms already being perfectly optimized and with no further room for improvement.
  • Designing new mental modules: Evidence against would be evidence that the human brain's existing mental modules are already sufficient for any cognitive task with any real-world relevance.
  • Modifiable motivation systems: Evidence against would be evidence that humans are already optimal at motivating themselves to work on important tasks, that realistic techniques could be developed to make humans optimal in this sense, or that having a great number of minds without any akrasia issues would have no major advantage over humans.
  • Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won't be enough computing power to run many copies. Alternatively, that copying minds would result in rapidly declining marginal returns and that the various copying advantages discussed by e.g. Hanson and Shulman aren't as big as they seem.
  • Perfect co-operation: Evidence against would be that no minds can co-operate better than humans do, or at least not to such an extent that they'd receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of co-operation.
  • Superior communication: Evidence against would be that no minds can communicate better than humans do, or at least not to such an extent that they'd receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of communication.
  • Transfer of skills: Evidence against would be that no minds can teach better than humans do, or at least not to such an extent that they'd receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of skill transfer.
  • Various biases: Evidence against would either be that human cognitive biases are not actually major ones, or that no mind architecture could overcome them. Also, evidence that humans actually have a realistic chance of overcoming most biases.

Depending on how you define "the Singularity", some of these may be irrelevant. Personally, I think the most important aspect of the Singularity is whether minds drastically different from humans will eventually take over, and how rapid the transition could be. Excluding the possibility of a rapid takeover would require at least strong evidence against gains from increased serial power, increased parallel power, improved algorithms, new mental modules, copyability, and transfer of skills. That seems quite hard to come by, especially once you take into account the fact that it's not enough to show that e.g. current trends in hardware development show mostly increases in parallel instead of serial power - to refute the gains from increased serial power, you'd also have to show that this is indeed some deep physical limit which cannot be overcome.
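To make the serial-versus-parallel point concrete: Amdahl's law caps the speedup you can get from parallelism by the fraction of a task that must run serially, no matter how many cores you add. A minimal Python sketch, with purely illustrative numbers:

    # Amdahl's law: overall speedup of a task whose fraction s must run
    # serially, with the remaining (1 - s) spread across n cores.
    def amdahl_speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    for cores in (1, 16, 1024, 10**6):
        print(cores, "cores:", round(amdahl_speedup(0.1, cores), 2), "x")

    # Even with a million cores, a 10% serial fraction caps the overall
    # speedup at about 10x, which is why serial power stays relevant.

Whether real cognitive workloads have a large irreducible serial fraction is exactly the kind of open empirical question at stake here.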

Comment author: XiXiDu, 20 November 2011 03:08:00PM, 1 point

Superior processing power: Evidence against would be the human brain already being close to the physical limits of what is possible.

It is often cited how much faster expert systems are within their narrow domains of expertise. But does that show that the human brain is actually slower, or merely that it can't focus all of its resources on a single narrow task? Take for example my ability to simulate some fantasy environment, off the top of my head, in front of my mind's eye. Or the ability of humans to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. Our best computers don't even come close to that.

Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.

Chip manufacturers already earn most of their money by making their chips more energy-efficient and more parallel, not by increasing serial speed.

Improved algorithms: Evidence against would be the human brain's algorithms already being perfectly optimized and with no further room for improvement.

We simply don't know how efficient the human brain's algorithms are. You can't just compare artificial algorithms with the human ability to accomplish tasks that were never selected for by evolution.

Designing new mental modules: Evidence against would be evidence that the human brain's existing mental modules are already sufficient for any cognitive task with any real-world relevance.

This is an actual feature. It is not clear that a general intelligence with a huge amount of plasticity would work at all rather than mess itself up.

Modifiable motivation systems: Evidence against would be evidence that humans are already optimal at motivating themselves to work on important tasks...

This is an actual feature as well; see dysfunctional autism.

Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won't be enough computing power to run many copies.

You don't really anticipate being surprised by evidence on this point, because the sort of "mind" your definition refers to doesn't even exist and therefore can't be shown to be uncopyable. And regarding brains: show me some neuroscientists who think that minds are effectively copyable.

Perfect co-operation: Evidence against would be that no minds can co-operate better than humans do, or at least not to such an extent that they'd receive a major advantage.

Cooperation is a delicate quality: too much and you get frozen, too little and you can't accomplish much. Human science is a great example of a balance between cooperation and useful rivalry. How is a collective intellect of AGIs going to preserve the right balance without mugging itself into pursuing insane expected-utility calculations?
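To make the "mugging" worry concrete, here is a toy expected-utility calculation in Python; the probabilities and utilities are made up purely for illustration:

    # Naive expected-utility maximization: an arbitrarily improbable but
    # astronomically large payoff can dominate every mundane option.
    options = {
        "mundane plan": (0.99, 100.0),     # (probability, utility), made-up numbers
        "pascalian wager": (1e-20, 1e30),  # tiny probability, huge payoff
    }

    for name, (p, u) in options.items():
        print(name, "EU =", p * u)

    best = max(options, key=lambda name: options[name][0] * options[name][1])
    print("naive maximizer picks:", best)  # the wager wins despite p = 1e-20

A collective of expected-utility maximizers would need some principled way of discounting such options, and it isn't obvious what that would be.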

Excluding the possibility of a rapid takeover would require at least strong evidence against gains...

Wait, are you saying that the burden of proof rests with those who are skeptical of a Singularity? Are you saying that the null hypothesis is a rapid takeover? What evidence allowed you to form that hypothesis in the first place? Making up unfounded conjectures and then telling others to disprove them will lead to privileging random high-utility possibilities that sound superficially convincing, while ignoring other problems that are grounded in empirical evidence.

...it's not enough to show that e.g. current trends in hardware development show mostly increases in parallel instead of serial power - to refute the gains from increased serial power, you'd also have to show that this is indeed some deep physical limit which cannot be overcome.

All that doesn't even matter. Computational resources are mostly irrelevant when it comes to risks from AI. What you have to show is that recursive self-improvement is possible. It is a question of whether you can dramatically speed up the discovery of unknown unknowns.
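For concreteness, here is one toy way to state the claim that would have to be shown, a sketch that simply assumes a feedback parameter rather than arguing for one:

    # Toy model: "recursive self-improvement" read as capability feeding back
    # into its own growth rate. The feedback constant is a pure assumption;
    # whether anything like it holds for real AI systems is the open question.
    capability = 1.0
    feedback = 0.1  # assumed fraction of capability convertible into better self-design

    for step in range(10):
        capability *= 1.0 + feedback * capability
        print(step, round(capability, 2))

    # With feedback > 0 the growth compounds on itself; with feedback near 0
    # (self-improvement not possible), capability barely moves.

The empirical question is whether any such feedback loop exists at all, which is precisely the question of whether the discovery of unknown unknowns can be dramatically sped up.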