Comment author: JonMcGuire 04 September 2013 04:03:52PM 27 points [-]

But, of course, the usual response to any new perspective is "That can't be right, because I don't already believe it."

Eugene McCarthy, Human Origins: Are We Hybrids?

In response to Science as Attire
Comment author: Stuart_Armstrong 23 August 2007 04:38:20PM 0 points [-]

belief as scientific attire, like wearing a lab coat. Science (unlike religion) has proved its myths - by putting men on the moon, mobile phones in people's pockets, and curing diseases. It's paid its dues to reliability. So unless I am willing to look into it myself, I should, as a default, believe most things scientists claim. And, unless I'm willing to study the press extensively, I should defer to uncontradicted press stories about scientific claims, especially if they're repeated. This makes science into a literary genre, but it's the only real option for non-journalist non-scientists.

The press runs reports like "scientists attack creationist teachings", but I've never seen "scientists attack common misconceptions about evolutionary theory". Ergo, bashing creationists is sensible, but worrying about my own poor understanding is not.

As for the issue you identified - attitudes towards AIs - there are some other aspects reinforcing people's attitudes (this may be why you found this response so prevalent). There is no theoretical barrier to constructing human-level AIs, since humans exist. We can improve human intelligence in various ways (slightly), so we can improve these AIs too (so slightly better than human is possible too). For super-AIs, on the other hand, nothing exists in the world to show that they are possible. And if someone had created them, or were confident of creating them, this would have been reported in the press. So the man on the street, even without apocalyptic literature, should conclude that super-AIs are far from today's technology (of course, the apocalyptic literature doesn't help).

Some people I know then make the same mistake you mentioned - super-AIs aren't science (no repeatable experiment). They are, however, talked about in scientific language. Hence they must be pseudo-science.

Jonvon, there is only one human superpower. It makes us what we are. It is our ability to think. I'd add speech, empathy, an opposable thumb, quite a long lifespan, and superior social skills to the list. All of them have just as much claim to "making us who we are" (though thinking and social skills have the best claim to "making us who we will be").

Comment author: JonMcGuire 16 August 2013 08:36:38PM 0 points [-]

"There is no theoretical barrier to constructing human-level AIs, since humans exist. We can improve human intelligence in various ways (slightly), so we can improve these AIs too (so slightly better than human is possible too). For super-AIs, on the other hand, nothing exists in the world to show that they are possible."

If the "man on the street" has sufficient familiarity with The Wonders of Science to accept human-level AIs as genuinely possible, it seems to me that the sheer boost in simple processing power available to an artificial construct is probably enough to support at least a theoretical acceptance of the possibility of super-AI. If somebody understands technology well enough to allow for near-human AI, I would expect to find that they assume super-AI is not just possible, but trivial ... witness people attributing all sorts of mysterious and nefarious intelligent behavior to Google's on-the-fly search predictions, for example...

In response to Fake Explanations
Comment author: aspera 09 October 2012 04:22:39AM 0 points [-]

Unless I misunderstand, this story is a parable. EY is communicating with a handwaving example that the effectiveness of a code doesn't depend on the alphabet used. In the code used to describe the plate phenomenon, “magic” and “heat conduction” are interchangeable symbols which formally carry zero information, since the coder doesn't use them to discriminate among cases.

I’m sincerely confused as to why comments center on the motivations of the students and the professor. Isn't that irrelevant? Or did EY mean for the discussion to go this way? Does it matter?

In response to comment by aspera on Fake Explanations
Comment author: JonMcGuire 16 August 2013 08:04:24PM 2 points [-]

People focus on the motivations of the students and the professor because the professor's behavior is unorthodox. The students paid good money to learn about physics. As others have mentioned, you can't be too hard on them; they arrived at class expecting a physics lesson, not sleight-of-hand. Consequently, my initial response to the article was that I understood what EY meant to convey, but I thought there were probably other ways to illustrate it that didn't involve the unnecessary "trickery" demonstrated by the professor.

However, upon further reflection, the professor's trickery itself could be characterized as relevant to EY's point. If we completely ignore the proffered "magic explanations" from the students, one might consider the professor's trick a lesson that all the physics education in the world may be inadequate to explain a puzzling observation. In other words, I found it helpful to assume that the professor was also trying to make a point similar to the one EY was making, instead of assuming that the professor just felt like being a jerk that day.

As a bonus, by focusing on the conditions of the scenario instead of just the answers, a student smart enough to recognize that his education may be inadequate could still answer "I don't have enough information to explain this," which implies he still believes there is an explanation. That might be a better answer than just "I don't know," which sounds a lot like giving up.

Comment author: jaibot 25 July 2013 12:10:13AM *  1 point [-]

Once you account for this and other forms of derivatives-are-hard-to-count problems, it's down to "only" a few trillion. I'm unable to find a good estimate of what fraction of the "financial system" that represents (scare quotes indicating the term is not well defined here).

Comment author: JonMcGuire 03 August 2013 01:56:55PM *  2 points [-]

WQ's response was great. I have worked in "the financial system" for nearly 20 years, and just in terms of real assets under management, most of the handful of large global firms each manage assets in the low trillions. Those are "real" assets in the sense that they aren't anything you could say was being counted more than once due to arcane accounting rules or contractual obligations or whatever.

Back around 2007/2008 there were a few numbers in the mid-tens-of-trillions being tossed around in the news that were guesstimates of the total global net worth, and although I forget the numbers now, I would be very surprised if the sum of assets under management was not approximately equal to those numbers. Via trusts and so on, such firms even manage physical assets (paintings, yachts, diamond rings, office buildings) to varying extents that most people don't think about as being managed by "the financial system."

The only place where I'd change WQ's response is to blame regulation more than legal costs. Not that legal is cheap for these firms, but regulation and compliance are the biggest cost factor by such a large margin that everything else is nearly irrelevant (in fact, legal costs are mostly just a side-effect).

Comment author: MugaSofer 13 April 2013 03:18:22PM *  0 points [-]

Huh. You're right, it looks like it went missing - perhaps during the change to MIRI?

I'm afraid I've been unable to find a copy; looks like everyone was linking to that one vanished copy.

EDIT: which is still available at WaybackMachine.

Comment author: JonMcGuire 13 April 2013 03:53:11PM 1 point [-]

Awesome, thanks. Didn't even think to look there... time to wget the whole thing!

Comment author: JonMcGuire 13 April 2013 02:58:41PM 3 points [-]

New to LW... my wife re-ignited my long-dormant interest in AI via Yudkowsky's Friendly AI stuff.

Is there a link somewhere to "General Intelligence and Seed AI"? It seems that older content at intelligence.org has gone missing. It actually went missing while my wife was in the middle of reading it online... very frustrating. Friendly AI makes a lot of references to it. Seems important to read it.

I'd prefer a PDF, if somebody knows where to find one.

Thanks!