Comment author: olalonde 25 April 2012 05:05:05PM 2 points [-]

This suggests that intelligence is an externality, like pollution.

This sentence doesn't really make sense. Intelligence in itself is not a "cost imposed on a third party" (the definition of an externality)... Perhaps you mean that intelligence leads to more externalities?

Furthermore, this study is definitely flawed, since it's quite obvious that individual intelligence has done a great deal more good for society than bad. Is there even an argument about this?

Comment author: olalonde 25 April 2012 04:50:58PM *  3 points [-]

One way to get around the argument over semantics would be to replace "sound" with its definition.

...

Albert: "Hah! Definition 2c in Merriam-Webster: 'Sound: Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'"

Barry: "Hah! Definition 2b in Merriam-Webster: 'Sound: The sensation perceived by the sense of hearing.'"

Albert: "Since we cannot agree on the definition of sound and a third party might be confused if he listened to us, can you reformulate your question, replacing the word sound by its definition."

Barry: "OK. If a tree falls in the forest, and no one hears it, does it cause anyone to have the sensation perceived by the sense of hearing?"

Albert: "No."

Comment author: Vulture 25 April 2012 02:22:54AM 2 points [-]

Uh... I think the fact that humans aren't cognitively self-modifying (yet!) doesn't have to do with our intelligence level so much as the fact that we were not designed explicitly to be self-modifying, as the SIAI is assuming any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.

Comment author: olalonde 25 April 2012 11:21:47AM 0 points [-]

Isn't it implied that sub-human intelligence is not designed to be self-modifying, given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?

Comment author: Bugmaster 25 April 2012 12:29:35AM 0 points [-]

The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively -- i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from "monkey" to "quasi-godlike" very quickly, potentially so quickly that you won't even notice it happening.

FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI's worries are way overblown, but that's just my personal opinion.

Comment author: olalonde 25 April 2012 12:41:07AM *  0 points [-]

Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to a human one.

Comment author: Bugmaster 24 April 2012 11:41:31PM 2 points [-]

As far as I understand, a "Bayesian Rationalist" is someone who bases their beliefs (and thus decisions) on Bayesian probability, as opposed to ye olde frequentist probability. An X-rationalist is someone who embraces both epistemic and instrumental rationality (the Bayesian kind) in order to optimize every aspect of his life.

Comment author: olalonde 25 April 2012 12:23:50AM 0 points [-]

You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?

Comment author: Bugmaster 24 April 2012 11:36:07PM 0 points [-]

Most people here would probably tell you to immediately stop your work on AGI, until you can be reasonably sure that your AGI, once you build and activate it, would be safe. As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one.

I could be wrong though, and I may be inadvertently caricaturing their position, so take my words with a grain of salt.

Comment author: olalonde 25 April 2012 12:12:41AM *  0 points [-]

I understand your concern, but at this point we're not even near monkey-level intelligence; when I get to 5-year-old human-level intelligence, I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.

Comment author: olalonde 24 April 2012 10:54:20PM *  5 points [-]

Hi all! I've been lurking on LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference between that and being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by 2 atheist doctors (as in PhD). I'm a software engineer and I'm mostly interested in the technical aspects of achieving AGI. Since I was a kid, I've always dreamed of seeing an AGI within my lifetime. I'd be curious to know if there are some people here working on actually building an AGI. I was born in Canada, have lived in Switzerland, and am now living in China. I'm 23 years old IIRC. I believe I'm quite far from the stereotypical LWer on the personality side, but I guess diversity doesn't hurt.

Nice to meet you all!

Comment author: olalonde 24 April 2012 11:14:44PM 1 point [-]

Before I get more involved here, could someone explain to me what the following are:

1) x-rationality (extreme rationality)
2) a rationalist
3) a Bayesian rationalist

(I know what rationalism and Bayes' theorem are, but I'm not sure what the terms above refer to in the context of LW.)
