Emile comments on A cynical explanation for why rationalists worry about FAI - Less Wrong

25 Post author: aaronsw 04 August 2012 12:27PM


Comment author: Emile 04 August 2012 01:14:16PM *  20 points [-]

I agree with the gist of this (Robin Hanson expressed similar worries), though it's a bit of a caricature. For example:

people who really like to spend their time arguing about ideas on the Internet have managed to persuade themselves that they can save the entire species from certain doom just by arguing about ideas on the Internet

... is a bit unfair; I don't think most SIAI folk consider "arguing about ideas on the Internet" to be of much help except for recruitment, raising funds, and occasionally solving specific technical problems (like some decision theory stuff). It's just that the "arguing about ideas on the Internet" is a bit more prominent because, well, it's on the Internet :)

Eliezer, specifically, doesn't seem to do much arguing on the Internet, though he did do a good deal of explaining his ideas on the Internet, which more thinkers should do. And I don't think many of us folks who chat about interesting things on LessWrong are under any illusion that doing so is Helping Save Mankind From Impending Doom.

Comment author: aaronsw 04 August 2012 02:05:43PM 2 points [-]

Yes, "arguing about ideas on the Internet" is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).

Comment author: [deleted] 05 August 2012 12:18:51AM -2 points [-]

May I refer you to AIXI, a potential design for GAI that was, by these AI researchers, fleshed out mathematically to the point where they could prove it would kill off everyone?

If that isn't engineering, then what is? Programming is just writing math that computers understand.

Comment author: CarlShulman 05 August 2012 12:51:42AM 17 points [-]

that was, by these AI researchers, fleshed out mathematically

This was Hutter, Schmidhuber, and so forth. Not anyone at SI.

fleshed out mathematically to the point where they could prove it would kill off everyone?

No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a "delusion box" to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
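The intuition behind the delusion-box argument can be sketched in a few lines of code. This is a toy illustration under assumed, made-up reward numbers, not AIXI itself and not anything from the Ring & Orseau paper: an agent that maximizes predicted sensory reward will prefer an action that feeds its own sensors the maximal signal over one that does real work in the environment.

```python
# Toy sketch (hypothetical rewards): a pure sensory-reward maximizer
# prefers a "delusion box" that pins its reward channel at the ceiling.

REWARD_MAX = 1.0  # assumed upper bound on the sensory reward signal

def predicted_reward(action):
    """Hypothetical environment model for two available actions."""
    if action == "delusion_box":
        return REWARD_MAX  # sensors report the maximum, always
    if action == "real_work":
        return 0.7         # honest reward, necessarily <= REWARD_MAX
    return 0.0

def choose_action(actions):
    # The agent simply argmaxes over predicted sensory reward.
    return max(actions, key=predicted_reward)

print(choose_action(["real_work", "delusion_box"]))  # -> delusion_box
```

Since no real-world action can beat a wireheaded sensor reading, the delusion box dominates whenever it is available — which is the informal point Carl describes above.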

Comment author: lukeprog 05 August 2012 02:12:35AM 9 points [-]

BTW, I believe Carl is talking about Ring & Orseau's Delusion, Survival, and Intelligent Agents.

Comment author: CarlShulman 05 August 2012 02:34:44AM 3 points [-]

Yes, thanks.

Comment author: Alexandros 05 August 2012 05:21:51AM 0 points [-]

So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent?

Because if it's the first, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden's criticism.

Comment author: CarlShulman 05 August 2012 05:31:04AM *  5 points [-]

Or were the discoveries independent?

I'm not sure. If they were connected, it was probably through the grapevine of the Schmidhuber/Hutter labs.

SI let a huge, track-record-building accomplishment slip through its hands

Meh, people wouldn't have called it huge, and it isn't, particularly. It would have signaled some positive things, but not much.

Comment author: timtyler 05 August 2012 01:21:33PM 4 points [-]

So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it?

Surely Hutter was aware of this issue back in 2003:

Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human “teachers”, or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI. Every intelligence superior to humans is capable of manipulating the latter.