I think that "the role of language in human thought" is one of the ways that AI could be very different from us. There is research into how different languages affect cognitive abilities (e.g., https://psych.stanford.edu/~lera/papers/sci-am-2011.pdf). One of the examples given is that, as a native English speaker, I may have more difficulty learning the base-10 structure of numbers than a Mandarin speaker because of the difference in the number words used in these languages. Language can also affect memory, emotion, etc.
I'm guessing that an AI's cognitive ability wouldn't change no matter what human language it's using, but I'd be interested to know what people doing AI research think about this.
If one considers not only the current state of affairs on Earth when predicting when superintelligent AI will occur, but the whole of the universe (or at least our galaxy), this raises an AI-related Fermi paradox: where are they?
I assume that extraterrestrial civilizations (given they exist) which have advanced to a technological society will undergo accelerating progress similar to ours and eventually create a superintelligent AI. After the intelligence explosion, the AI would (with some probability) start consuming energy from planets and stars, convert matter to further its computational power, and send out von Neumann probes, which could reach every star of the Milky Way within roughly a million years when travelling at just 10% of the speed of light, turning everything into computronium. This does not have to be a catastrophic event for life; a benign AI could spare worlds that harbor life. It would spread orders of magnitude faster than its biological creators, because it would copy itself faster and travel in many more directions simultaneously. Our galaxy could, and arguably should, have been consumed by an ever-growing sphere of AI up to billions of years ago (probably many times over, by competing AIs from various civilizations). But we see no traces of such a thing.
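As a back-of-the-envelope check of the timescale above, here is a minimal sketch. The figures are assumptions, not from the original post: a Milky Way diameter of roughly 100,000 light-years and probes travelling at 10% of the speed of light (ignoring stops for self-replication, which would add time but not change the order of magnitude).

```python
# Rough travel-time estimate for von Neumann probes crossing the galaxy.
# Assumed figures: ~100,000 light-year diameter, probe speed 0.1c.

GALAXY_DIAMETER_LY = 100_000  # assumed approximate diameter in light-years
PROBE_SPEED_C = 0.10          # assumed probe speed as a fraction of c

# Light takes one year per light-year, so a probe at 0.1c takes 10x as long.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"Crossing time: {crossing_time_years:,.0f} years")
```

Even with generous slowdowns for replication at each stop, the result stays on the order of a million years, which is tiny compared with the billions of years the galaxy has existed; that gap is what makes the paradox bite.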
Are we alone? Did no one ever create a superintelligent AI? Did the AIs and their creators go the other way (i.e., instead of expanding, they chose to retire into a simulated world with no interest in growing, going anywhere, or contacting anyone)? Did it already happen, and are we part or product of it (i.e., a simulation)? Is it happening right in front of us and we, dumb as goldfish, can't see it?
Should these questions, which would certainly shift the probabilities, be part of AI predictions?
Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed with expanding into space for that purpose.
Nor is it obvious that it would want to continuously maximize its resources, at least on a galactic scale. This, too, is a very biological impulse: why should an AI have it built in?
When we talk about AI this way, I think we commit something like Descartes' Error (see Damasio's book of that name): assuming that the rational mind can function on its own. But our higher cognitive abilities are primed and driven by emotions and impulses, and when these are absent, one is unable to make even simple, instrumental decisions. In other words, before we assume anything about an AI's behavior, we should consider its built-in motivational structure.
I haven't read Bostrom's book, so perhaps he makes a strong argument for these assumptions that I am not aware of, in which case, could someone summarize them?