Desrtopa comments on No Universally Compelling Arguments in Math or Science - Less Wrong

30 Post author: ChrisHallquist 05 November 2013 03:32AM




Comment author: Desrtopa 05 November 2013 09:02:24PM -1 points [-]

That doesn't actually answer the question at all.

This is one of the key ways in which our development of technology differs from an ecosystem. In an ecosystem, mutations are random, and are selected solely on how effectively they propagate themselves in the gene pool. In the development of technology, we do not have random mutations; we have human beings deciding what does or does not seem like a good idea to implement in technology, and then using market forces as feedback. This fails to get us around a) the difficulty of humans actually figuring out strict formalizations of our desires sufficient to make a really powerful AI safe, and b) failure scenarios resulting in "oops, that killed everyone."

The selection process we actually have does not offer us a single do-over in the event of catastrophic failure, nor does it rigorously select for outputs that, given sufficient power, will not fail catastrophically.

Comment author: TheAncientGeek 05 November 2013 09:09:30PM *  -1 points [-]

There is no problem of strict formalization, because that is not what I am aiming at; it's your assumption.

I am aware that the variation isn't random. I don't think that is significant.

I don't think sudden catastrophic failure is likely in incremental/evolutionary progress.

I don't think mathematical "proof" is going to be as reliable as you think, given the complexity.

Comment author: Desrtopa 05 November 2013 09:25:43PM *  0 points [-]

One of the key disanalogies between your "ecosystem" formulation and human development of technology is that natural selection isn't an actor subject to feedback within the system.

If an organism develops a mutation which is sufficiently unfavorable to the Blind Idiot God, the worst case scenario is that it's stillborn or, under exceptional circumstances, that its lineage evolves to extinction. There is no possible failure state where an organism develops such an unfavorable mutation that evolution itself keels over dead.

However, in an ecosystem where multiple species interrelate and impose selection effects on each other, a sudden change in circumstances for one species can result in rapid extinction for others.

We impose selection effects on technology, but a sudden change in technology which kills us all would not be a novel occurrence by the standards of how ecosystems operate.

ETA: It seems that your argument all along has boiled down to "We'll just deliberately not do that" when it comes to cases of catastrophic failure. But the argument of Eliezer and MIRI all along has been that such catastrophic failure is much, much harder to avoid than it intuitively appears.