JonMcGuire

But, of course, the usual response to any new perspective is "That can't be right, because I don't already believe it."

Eugene McCarthy, Human Origins: Are We Hybrids?

"There is no theoretical barrier to constructing human level AI's, since humans's exist. We can improve human intelligence in various ways (slightly), so we can improve these AI's too (so slightly better than human is possible too). For super-AIs, on the other hand, nothing exists in the world to show that they are possible."

If the "man on the street" has sufficient familiarity with The Wonders of Science to accept human-level AIs as genuinely possible, it seems to me that the sheer boost in simple processing power available to an artificial construct is probably enough to support at least a theoretical acceptance of the possibility of super-AI. If somebody understands technology well enough to allow for near-human AI, I would expect to find that they assume super-AI is not just possible, but trivial ... witness people attributing all sorts of mysterious and nefarious intelligent behavior to Google's on-the-fly search predictions, for example...

People focus on the motivations of the students and the professor because the professor's behavior is unorthodox. The students paid good money to learn about physics. As others have mentioned, you can't be too hard on them: they arrive at class expecting a physics lesson, not sleight of hand. Consequently, my initial response to the article was that I understood what EY meant to convey, but I thought there were probably other ways to illustrate it that didn't involve the unnecessary "trickery" demonstrated by the professor.

However, upon further reflection, the professor's trickery itself could be characterized as relevant to EY's point. If we completely ignore the proffered "magic explanations" from the students, one might consider the professor's trick a lesson that all the physics education in the world may be inadequate to explain a puzzling observation. In other words, I found it helpful to assume that the professor was also trying to make a point similar to EY's, rather than assuming that the professor just felt like being a jerk that day.

As a bonus, by focusing on the conditions of the scenario instead of just the answers, a student smart enough to recognize that their education may be inadequate could still answer "I don't have enough information to explain this." That answer implies the student still believes there is an explanation, which might be better than a plain "I don't know," which sounds a lot like giving up.

WQ's response was great. I have worked in "the financial system" for nearly 20 years, and just in terms of real assets under management, most of the handful of large global firms each manage assets in the low trillions. Those are "real" assets in the sense that they aren't being counted more than once due to arcane accounting rules, contractual obligations, or the like.

Back around 2007/2008, a few numbers in the mid-tens-of-trillions were being tossed around in the news as guesstimates of total global net worth, and although I forget the exact figures now, I would be very surprised if the sum of assets under management was not approximately equal to those numbers. Via trusts and so on, such firms even manage physical assets (paintings, yachts, diamond rings, office buildings) to varying extents that most people don't think of as being managed by "the financial system."

The only place where I'd change WQ's response is to blame regulation more than legal costs. Not that legal is cheap for these firms, but regulation and compliance are the biggest cost factor by such a large margin that everything else is nearly irrelevant (in fact, legal costs are mostly just a side effect).

Awesome, thanks. Didn't even think to look there... time to wget the whole thing!

New to LW... my wife reignited my long-dormant interest in AI via Yudkowsky's Friendly AI stuff.

Is there a link somewhere to "General Intelligence and Seed AI"? It seems that older content at intelligence.org has gone missing; it actually disappeared while my wife was in the middle of reading it online... very frustrating. Friendly AI makes a lot of references to it, so it seems important to read.

I'd prefer a PDF, if somebody knows where to find one.

Thanks!