Exploration is a very human activity; it's in our DNA, you might say. I don't think we should take for granted that an AI would be as obsessed with expanding into space for that purpose.

Nor is it obvious that it will want to continuously maximize its resources, at least on the galactic scale. That, too, is a very biological impulse - why should an AI have it built in?

When we talk about AI this way, I think we commit something like Descartes' Error (see Damasio's book of that name): thinking that the rational mind can function on its own. But our higher cognitive abilities are primed and driven by emotions and impulses, and when these are absent, one is unable to make even simple, instrumental decisions. In other words, before we assume anything about an AI's behavior, we should consider its built-in motivational structure.

I haven't read Bostrom's book, so perhaps he makes a strong argument for these assumptions that I'm not aware of - in which case, could someone summarize them?

Lera Boroditsky is one of the premier researchers on this topic. She's also done some excellent work comparing spatial/time metaphors in English and Mandarin, showing that the dominant idioms in each language affect how people cognitively process time.

But the question is broader: is some form of natural language required? ("Natural" - roughly meaning used by a group in day-to-day life - is the key qualifier here.) Differences between major natural languages are for the most part relatively superficial and translatable, because their speakers are generally dealing with a similar reality.

A different (non-technical) way to argue for their reducibility is through an analysis of the role of language in human thought. The logic is that language by its very nature extends into all aspects of cognition (little human thought of interest takes place outside its reach), and so one cannot have the one without the other. I believe that's the rationale behind the Turing test.

It's interesting that you mention machine translation, though. I wouldn't equate that with language understanding. Modern translation programs are getting very good, and may in time become "perfect" (indistinguishable from competent native speakers), but they do this through pattern recognition over a massive corpus of translation data - not through understanding the text.
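
To make the distinction concrete, here's a deliberately crude sketch (my own toy example, not how any real MT system is built): a "translator" that only pattern-matches against memorized phrase pairs. It can emit fluent output for inputs it has seen, while having no model of what any sentence means.

```python
# Toy illustration only: a "translator" that just looks up memorized phrase pairs.
# The phrase table below is hypothetical and tiny; real systems learn statistical
# or neural patterns from enormous corpora, but the point is the same - fluent
# output does not require any model of meaning.

phrase_table = {
    "good morning": "bonjour",
    "thank you very much": "merci beaucoup",
    "where is the station": "où est la gare",
}

def translate(sentence: str) -> str:
    """Return the memorized translation if the exact phrase is in the 'corpus'."""
    key = sentence.lower().strip("?!. ")
    # Pure lookup: no parsing, no semantics, no world model.
    return phrase_table.get(key, "<no match in corpus>")

print(translate("Thank you very much!"))  # -> merci beaucoup
print(translate("Thank you so much!"))    # -> <no match in corpus>
```

Real systems generalize far better than this lookup table, of course, but the underlying move is still matching patterns in data rather than grasping what the sentences are about.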

I think the best bets as of today would be truly cheap energy (whether through fusion, ubiquitous solar, etc.) and nano-fabrication. Though it may not happen, we could see these play out on a 20-30 year horizon.

The bumps from these, however, would be akin to the steam engine: dwarfed by (or possibly a result of) the AI.

Oh, I completely agree with the prediction of explosive growth (or at least its strong likelihood); I just think (1), or something like it, is a much better argument than (2) or (3).

I'll take a stab at it.

We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder it for a second, you'll see that it is so far outside the realm of human experience that one cannot "understand" that dual nature in the sense that one "understands" the motion of planets around the sun. "Understanding" in the way I mean it is the basis for accurate analogies and insight. Thus I would argue Kepler was able to use light as an analogy for "gravity" because he understood both (even though he didn't yet have the mathematics of gravitation).

Perhaps an even better example is quantum entanglement: theory may predict, and we may observe, entangled particles "communicating" at a distance faster than light, but (for now at least) I don't think we have really incorporated it into our (pre-symbolic) conception of the world.

I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to serve as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.

I find the whole idea of predicting AI-driven economic growth by treating all of human history as a single data set really unconvincing. It is one thing to extrapolate uptake patterns of a particular technology based on similar technologies in the past. But "true AI" is so broad, and, at least on many accounts, so transformative, that making such macro-predictions seems a fool's errand.
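
As a rough illustration of the fragility I have in mind (a toy sketch of my own, using made-up, rounded index values rather than real GDP data): two different growth curves can be anchored to the same coarse historical series and still produce wildly different pictures once extrapolated forward.

```python
# Toy sketch with hypothetical, rounded "world output" index values (not real data).
import numpy as np

years  = np.array([1000, 1500, 1800, 1900, 1950, 2000], dtype=float)
output = np.array([1.0,  1.6,  3.0,  6.0,  15.0, 60.0])

# Model 1: exponential growth, fit by least squares in log space.
slope, intercept = np.polyfit(years, np.log(output), 1)
exp_2100 = np.exp(intercept + slope * 2100)

# Model 2: hyperbolic growth, output ~ C / (T - year), anchored to the last
# data point with an (arbitrarily chosen) blow-up year T.
T = 2050.0
C = output[-1] * (T - years[-1])
hyp_2040 = C / (T - 2040)

print(f"Exponential model, year 2100: {exp_2100:,.0f}")
print(f"Hyperbolic model, year 2040:  {hyp_2040:,.0f} (and it diverges entirely by {T:.0f})")
# The forecasts differ enormously depending purely on which functional form you assume.
```

That sensitivity to the assumed curve is exactly why fitting one line through all of history strikes me as a weak basis for forecasting something as open-ended as "true AI."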

Again, this is a famous one, but Watson seems really impressive to me. It's one thing to parse basic questions and run a database lookup in response, but its ability to handle indirect questions that would confuse many a person (guilty) was surprising.

On the other hand, its implementation (as described in The Second Machine Age) seems to be just as algorithmic, brittle, and narrow as Deep Blue's - basically, Watson was only as good as its programmers...

Another way to get at the same point, I think, is to ask: are there things that we (contemporary humans) will never understand? (The question comes from a Quora post.)

I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or by comparing the earliest recorded examples of reasoning in history to modern reasoning. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that most people today, and certainly most people in the past, will never comprehend, at least not without massive effort, and possibly not even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.

As to the energy issue, I don't see any reason to think that such super-human cognitive systems would necessarily require more energy - though they may at first.
