XiXiDu comments on Why I Moved from AI to Neuroscience, or: Uploading Worms - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There's the rub! I happen to value technological progress as an intrinsic good, so classifying a Singularity as "positive" or "negative" is not easy for me. (I reject the notion that one can factorize intelligence from goals, so that one could take a superintelligence and fuse it with a goal to optimize for paperclips. Perhaps one could give it a compulsion to optimize for paperclips, but I'd expect it to either put the compulsion on hold while it develops amazing fabrication, mining and space travel technologies, and never completely turn its available resources into paperclips since that would mean no chance of more paperclips in the future; or better yet, rapidly expunge the compulsion through self-modification.)

Furthermore, I favor Kurzweil's smooth exponentials over "FOOM": although it may be even harder to believe that not only will there be superintelligences in the future, but that at no point between now and then will an objectively identifiable discontinuity happen, it seems more consistent with history.

Although I expect present-human culture to be preserved, as a matter of historical interest if not status quo, I'm not partisan enough to prioritize human values over the Darwinian imperative. (The questions linked seem very human-centric, and turn on how far you are willing to go in defining "human," suggesting a disguised query. Most science is arguably already performed by machines.) In summary, I'm just not worried about AI risk.
The good news for AI worriers is that Eliezer has personally approved my project as "just cool science, at least for now" -- not likely to lead to runaway intelligence any time soon, no matter how reckless I may be. Given that and the fact that I've heard many (probably most) AI-risk arguments, and failed to become worried (quite probably because I hold the cause of technological progress very dear to my heart and am thus heavily biased - at least I admit it!), your time may be better spent trying to convince Ben Goertzel that there's a problem, since at least he's an immediate threat. ;)
I doubt it. I don't believe that people like Jürgen Schmidhuber are a risk, apart from a very abstract possibility.
The reason is that they are unable to show applicable progress on a par with IBM Watson or Siri. And if they claim that their work relies on a single mathematical breakthrough, I doubt it would be justified, even in principle, to be confident in that prediction.
In short, either their work is incrementally useful, or it is based on wild speculation about the possible discovery of unknown unknowns.
The real risks, in my opinion, are: 1) that together they make many independent discoveries and someone builds something out of them; 2) that a huge company like IBM, or a military project, builds something; 3) the abstract possibility that a partly related field like neuroscience, or an unrelated field, provides the necessary insight to put two and two together.
Do you mean that intelligence is fundamentally interwoven with complex goals?
Do you mean that there is no point at which exploitation is favored over exploration?
I am not sure what you mean, could you elaborate? Do you mean something along the lines of what Ben Goertzel says in the following quote:
You further wrote:
What is your best guess at why people associated with SI are worried about AI risk?
If you had to steelman the arguments of the proponents of AI risk, what would be the strongest argument in their favor? Also, do you expect there to be anything that could possibly change your mind about the topic and make you worried?
Essentially, yes. I think that defining an arbitrary entity's "goals" is not obviously possible, unless one simply accepts the trivial definition of "its goals are whatever it winds up causing"; I think intelligence is fundamentally interwoven with causing complex effects.
I mean that there is no point at which exploitation is favored exclusively over exploration.
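One way to picture "exploitation never favored exclusively over exploration" is an epsilon-greedy bandit agent whose exploration rate stays permanently nonzero. The sketch below is purely illustrative and not from the discussion: the two-armed bandit, its reward means, and the fixed epsilon are all hypothetical choices.

```python
import random

def epsilon_greedy_bandit(pull, n_arms, steps, epsilon=0.1):
    """Run an epsilon-greedy agent that never stops exploring.

    `pull(arm)` returns a stochastic reward. Because epsilon stays
    fixed at a nonzero value, the agent keeps sampling every arm
    forever -- exploitation is never favored *exclusively*.
    """
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for _ in range(steps):
        if random.random() < epsilon:              # explore with prob. epsilon
            arm = random.randrange(n_arms)
        else:                                      # otherwise exploit best estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

# Two hypothetical arms with true mean rewards 0.3 and 0.7; even after
# the agent has identified the better arm, it still samples the worse one.
random.seed(0)
est = epsilon_greedy_bandit(lambda a: random.gauss([0.3, 0.7][a], 0.1),
                            n_arms=2, steps=5000)
```

A pure exploiter, by contrast, could lock onto a wrong arm after a few unlucky draws and never discover its mistake; the permanent exploration term is what rules that out.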
I'm 20 years old - I don't have any kids yet. If I did, I might very well feel differently. What I do mean is that I believe it to be culturally pretentious, and even morally wrong (according to my personal system of morals), to assert that it is better to hold back technological progress if necessary to preserve the human status quo, rather than allow ourselves to evolve into and ultimately be replaced by a superior civilization. I have the utmost faith in Nature to ensure that eventually, everything keeps getting better on average, even if there are occasional dips due to, e.g., wars; but if we can make the transition to a machine civilization smooth and gradual, I hope there won't even have to be a war (à la Hugo de Garis).
Well, the trivial response is to say "that's why they're associated with SI." But I assume that's not how you meant the question. There are a number of reasons to become worried about AI risk. We see AI disasters in science fiction all the time. Eliezer makes pretty good arguments for AI disasters. People observe that a lot of smart folks are worried about AI risk, and it seems to be part of the correct contrarian cluster. But most of all, I think it is a combination of fear of the unknown and implicit beliefs about the meaning and value of the concept "human".
In my opinion, the strongest argument in favor of AI-risk is the existence of highly intelligent but highly deranged individuals, such as the Unabomber. If mental illness is a natural attractor in mind-space, we might be in trouble.
Naturally. I was somewhat worried about AI-risk before I started studying and thinking about intelligence in depth. It is entirely possible that my feelings about AI-risk will follow a Wundt curve, and that once I learn even more about the nature of intelligence, I will realize we are all doomed for one reason or another. Needless to say, I don't expect this, but you never know what you might not know.
The laws of physics don't care. What process do you think explains the fact that you have this belief? If the truth of a belief isn't what causes you to have it, having that belief is not evidence for its truth.
I'm afraid it was no mistake that I used the word "faith"!
This belief does not appear to conflict with the truth (or at least that's a separate debate) but it is also difficult to find truthful support for it. Sure, I can wave my hands about complexity and entropy and how information can't be destroyed but only created, but I'll totally admit that this does not logically translate into "life will be good in the future."
The best argument I can give goes as follows. For the sake of discussion, at least, let's assume MWI. Then there is some population of alternate futures. Now let's assume that the only stable equilibria are entirely valueless state ensembles such as the heat death of the universe. With me so far? OK, now here's the first big leap: let's say that our quantification of value, from state ensembles to the nonnegative reals, can be approximated by a continuous function. Therefore, by application of Conley's theorem, the value trajectories of alternate futures fall into one of two categories: those which asymptotically approach 0, and those which asymptotically approach infinity. The second big leap involves disregarding those alternate futures which approach zero. Not only will you and I die in those futures, but we won't even be remembered; none of our actions or words will be observed beyond a finite time horizon along those trajectories. So I conclude that I should behave as if the only trajectories are those which asymptotically approach infinity.
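The claimed dichotomy can be caricatured with a toy dynamical system. Assuming, purely for illustration, that "value" evolves by repeated squaring, every nonnegative starting point except the unstable fixed point at 1 either collapses toward 0 or escapes toward infinity. This sketches the two trajectory categories in the argument, not Conley's theorem itself:

```python
def iterate_value(v0, steps=100):
    """Toy 'value dynamics': v_{n+1} = v_n ** 2.

    For v0 in [0, 1) the trajectory decays toward 0; for v0 > 1 it
    blows up; v0 = 1 is the lone (unstable) fixed point in between.
    A crude stand-in for the two asymptotic categories of futures.
    """
    v = v0
    trajectory = [v]
    for _ in range(steps):
        v = v * v
        if v > 1e100:                 # treat as 'approaching infinity'
            trajectory.append(float('inf'))
            return trajectory
        trajectory.append(v)
    return trajectory

low = iterate_value(0.9)    # starts below 1: collapses toward 0
high = iterate_value(1.1)   # starts above 1: escapes to infinity
```

The hand-wavy part of the original argument corresponds to the measure-zero boundary (here, the single point v = 1): in richer dynamics the boundary between basins can be large and complicated, which is exactly where the "big leaps" in the argument live.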
Is this a variant of quantum suicide, with the "suicide" part replaced by "dead and forgotten in the long run, whatever the cause"?
It seems to me like you assume that you have no agency in pushing the value trajectories of alternate futures towards infinity rather than zero, and I don't see why.