olalonde comments on Welcome to Less Wrong! (2012)

Post author: orthonormal, 26 December 2011 10:57PM

You are viewing a single comment's thread.

Comment author: olalonde 24 April 2012 10:54:20PM *  5 points [-]

Hi all! I have been lurking on LW for a few months (years?). I believe I was first introduced to LW through some posts on Hacker News (http://news.ycombinator.com/user?id=olalonde). I've always considered myself pretty good at rationality (is there a difference between that and being a rationalist?) and I've always been an atheist/reductionist. I recently (4 years ago?) converted to libertarianism (blame Milton Friedman). I was raised by two atheist doctors (as in PhD). I'm a software engineer, and I'm mostly interested in the technical aspects of achieving AGI. Since I was a kid, I've dreamed of seeing an AGI within my lifetime. I'd be curious to know if there are people here working on actually building an AGI. I was born in Canada, have lived in Switzerland, and am now living in China. I'm 23 years old IIRC. I believe I'm quite far from the stereotypical LWer on the personality side, but I guess diversity doesn't hurt.

Nice to meet you all!

Comment author: olalonde 24 April 2012 11:14:44PM 1 point [-]

Before I get more involved here, could someone explain to me what the following are:

1) x-rationality (extreme rationality)
2) a rationalist
3) a Bayesian rationalist

(I know what rationalism and Bayes' theorem are, but I'm not sure what the terms above refer to in the context of LW.)

Comment author: Nornagest 24 April 2012 11:37:38PM *  4 points [-]

In the context of LW, all those terms are pretty closely related unless some more specific context makes it clear that they're not. X-rationality is a term coined to distinguish the LW methodology (which is too complicated to describe in a paragraph, but the tagline on the front page does a decent job) from rationality in the colloquial sense, which is a much fuzzier set of concepts; when someone talks about "rationality" here, though, they usually mean the former and not the latter. This is the post where the term originates, I believe.

A "rationalist" as commonly used in LW is one who pursues (and ideally attempts to improve on) some approximation of LW methodology. "Aspiring rationalist" seems to be the preferred term among some segments of the userbase, but it hasn't achieved fixation yet. Personally, I try to avoid both.

A "Bayesian rationalist" is simply a LW-style rationalist as defined above, but the qualification usually indicates that some contrast is intended. A contrast with rationalism in the philosophical sense is probably the most likely; that's quite different and in some ways mutually exclusive with LW epistemology, which is generally closer to philosophical empiricism.

Comment author: Bugmaster 24 April 2012 11:44:39PM 0 points [-]

"Aspiring rationalist" seems to be the preferred term among some segments of the userbase...

AFAIK there's actually a user by that name, so I'd avoid the term just to minimize confusion.

Comment author: Bugmaster 24 April 2012 11:41:31PM 2 points [-]

As far as I understand, a "Bayesian Rationalist" is someone who bases their beliefs (and thus decisions) on Bayesian probability, as opposed to ye olde frequentist probability. An X-rationalist is someone who embraces both epistemic and instrumental rationality (the Bayesian kind) in order to optimize every aspect of his life.

Comment author: olalonde 25 April 2012 12:23:50AM 0 points [-]

You mean explicitly base their everyday beliefs and decisions on Bayesian probability? That strikes me as highly impractical... Could you give some specific examples?

Comment author: Nornagest 25 April 2012 01:45:20AM *  2 points [-]

As best I can tell it is impractical as an actual decision-making procedure for more complex cases, at least assuming well-formalized priors. As a limit to be asymptotically approached it seems sound, though -- and that's probably the best we can do on our hardware anyway.

Comment author: Bugmaster 25 April 2012 12:35:05AM *  0 points [-]

I thought I could, but Yvain kind of took the wind out of my sails with his post that Nornagest linked to, above. That said, Eliezer does outline his vision of using Bayesian rationality in daily life here, and in that whole sequence of posts in general.
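
For readers who want a concrete picture of what an explicit Bayesian update looks like, here is a minimal sketch (an editorial illustration, not something from the thread; the scenario and all the numbers are invented):

```python
# Toy example of a single explicit Bayesian update for an everyday judgment.
# Invented scenario: estimating whether a colleague is out sick,
# given that they missed the morning meeting.

prior = 0.05             # P(sick): base rate of being out sick on a given day
p_miss_given_sick = 0.9  # P(missed meeting | sick)
p_miss_given_well = 0.1  # P(missed meeting | not sick)

# Bayes' theorem: P(sick | missed) = P(missed | sick) * P(sick) / P(missed)
p_missed = p_miss_given_sick * prior + p_miss_given_well * (1 - prior)
posterior = p_miss_given_sick * prior / p_missed

print(f"P(sick | missed meeting) = {posterior:.2f}")  # ~0.32
```

Doing this explicitly for every belief is exactly the impracticality olalonde raises; as Nornagest notes above, it is better treated as a limit to be approximated than as a literal everyday procedure.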

Comment author: Bugmaster 24 April 2012 11:36:07PM 0 points [-]

Most people here would probably tell you to immediately stop your work on AGI, until you can be reasonably sure that your AGI, once you build and activate it, would be safe. As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one.

I could be wrong though, and I may be inadvertently caricaturing their position, so take my words with a grain of salt.

Comment author: wedrifid 25 April 2012 12:44:13AM 2 points [-]

As far as I understand, the mission of SIAI (the people who host this site) is to prevent the rise of un-Friendly AGI, not to actually build one.

I think they are kind of keen on the idea of not dying too. Improving the chances that a Friendly AI will be created by someone is probably up there as a goal too.

Comment author: Bugmaster 25 April 2012 12:52:25AM 2 points [-]

I think they are kind of keen on the idea of not dying too.

Imagine that ! :-)

Improving the chances that a Friendly AI will be created by someone is probably up there as a goal too.

That's a different goal, though. As far as I understand, olalonde's master plan looks something like this:

1). Figure out how to build AGI.
2). Build a reasonably smart one as a proof of concept.
3). Figure out where to go from there, and how to make AGI safe.
4). Eventually, build a transhuman AGI once we know it's safe.

Whereas the SIAI master plan looks something like this:

1). Make sure that an un-Friendly AGI does not get built.
2). Figure out how to build a Friendly AGI.
3). Build one.
4). Now that we know it's safe, build a transhuman AGI (or simply wait long enough, since the AGI from step (3) will boost itself to transhuman levels).

One key difference between olalonde's plan and SIAI's plan is the assumption SIAI is making: they are assuming that any AGI will inevitably (plus or minus epsilon) self-improve to transhuman levels. Thus, from their perspective, olalonde's step (2) above might as well say, "build a machine that's guaranteed to eat us all", which would clearly be a bad thing.

Comment author: wedrifid 25 April 2012 01:25:19AM 2 points [-]

One key difference between olalonde's plan and SIAI's plan is the assumption SIAI is making: they are assuming that any AGI will inevitably (plus or minus epsilon) self-improve to transhuman levels. Thus, from their perspective, olalonde's step (2) above might as well say, "build a machine that's guaranteed to eat us all", which would clearly be a bad thing.

A good summary. I'd modify it slightly, inasmuch as they would allow the possibility that a really weak AGI may not do much in the way of FOOMing; but they pretty much ignore those ones and expect they would just be a stepping stone for the developers, who would go on to make better ones. (This is just my reasoning, but I assume they would think similarly.)

Comment author: Bugmaster 25 April 2012 01:32:17AM 0 points [-]

Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario -- it's just using the developers' brains as its platform, as opposed to digital hardware. I don't know whether the SIAI folks would endorse this view, though.

Comment author: wedrifid 25 April 2012 01:52:51AM 3 points [-]

Good point. Though I guess we could still say that the weak AI is recursively self-improving in this scenario -- it's just using the developers' brains as its platform, as opposed to digital hardware.

Can't we limit the meaning of "self-improving" to at least stuff that the AI actually does? We can already say more precisely that the AI is being iteratively improved by the creators. We don't have to go around removing the distinction between what an agent does and what the creator of the agent happens to do to it.

Comment author: Bugmaster 25 April 2012 01:54:43AM 1 point [-]

Yeah, I am totally onboard with this suggestion.

Comment author: wedrifid 25 April 2012 01:57:34AM 1 point [-]

Great. I hope I wasn't being too pedantic there. I wasn't trying to find technical fault with anything essential to your position.

Comment author: TheOtherDave 25 April 2012 01:09:34AM 2 points [-]

[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve to transhuman levels.

Can you clarify your reasons for believing this, as distinct from "...any AGI has a non-negligible chance of self-improving to transhuman levels, and the cost of that happening is so vast that it's worth devoting effort to avoid even if the chance is relatively low"?

Comment author: Bugmaster 25 April 2012 01:18:08AM 1 point [-]

That's a good point, but, from reading what Eliezer and Luke are writing, I formed the impression that my interpretation is correct. In addition, the SIAI FAQ seems to be saying that intelligence explosion is a natural consequence of Moore's Law; thus, if Moore's Law continues to hold, intelligence explosion is inevitable.

FWIW, I personally disagree with both statements, but that's probably a separate topic.

Comment author: TheOtherDave 25 April 2012 03:31:38AM 0 points [-]

Huh. The FAQ you cite doesn't seem to be positing inevitability to me. (shrug)

Comment author: Bugmaster 25 April 2012 08:31:00PM 0 points [-]

You're right; I just re-read it, and it doesn't mention Moore's Law. Either it did at some point and then changed, or I saw that argument somewhere else. Still, the FAQ does seem to suggest that the only thing that can stop the Singularity is total human extinction (well, that, or the existence of souls, which IMO we can safely discount); that's pretty close to inevitability as far as I'm concerned.

Comment author: TheOtherDave 25 April 2012 09:01:42PM 0 points [-]

Note that the section you're quoting is no longer talking about the inevitable ascension of any given AGI, but rather the inevitability of some AGI ascending.

Comment author: Bugmaster 26 April 2012 08:36:34PM 0 points [-]

I thought they were talking specifically about an AGI that is capable of recursive self-improvement. This does not encompass all possible AGIs, but the non-self-improving ones are not likely to be very smart, as far as I understand, and thus aren't a concern.

Comment author: olalonde 25 April 2012 12:12:41AM *  0 points [-]

I understand your concern, but at this point we're not even near monkey-level intelligence, so when I get to five-year-old human-level intelligence, I think it'll be legitimate to start worrying. I don't think greater-than-human AI will happen all of a sudden.

Comment author: Bugmaster 25 April 2012 12:29:35AM 0 points [-]

The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively -- i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from "monkey" to "quasi-godlike" very quickly, potentially so quickly that you won't even notice it happening.

FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI's worries are way overblown, but that's just my personal opinion.

Comment author: wedrifid 25 April 2012 12:37:09AM 2 points [-]

i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet.

Recursively, not necessarily exponentially. It may exploit the low-hanging fruit early and improve somewhat more slowly once those are gone. The same conclusion applies: the threat is that it improves rapidly, not that it improves exponentially.

Comment author: Bugmaster 25 April 2012 12:42:48AM 0 points [-]

Good point, though if the AI's intelligence grew linearly or as O(log T) or something, I doubt that it would be able to achieve the kind of speed that we'd need to worry about. But you're right, the speed is what ultimately matters, not the growth curve as such.
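
As a toy illustration of the distinction being drawn here (an editorial sketch; the growth models and numbers are invented and not anything SIAI has published), the behaviour of a recursive improvement loop depends heavily on the returns to each improvement step:

```python
# Toy model: recursive self-improvement under different return curves.
# Each step, the AI converts its current capability into further improvement.
import math

def grow(step_gain, capability=1.0, steps=20):
    """Iterate capability -> capability + step_gain(capability)."""
    history = [capability]
    for _ in range(steps):
        capability += step_gain(capability)
        history.append(capability)
    return history

# Proportional returns: each unit of capability buys a fixed fraction more.
explosive = grow(lambda c: 0.5 * c)
# Diminishing returns: the low-hanging fruit is exhausted as capability grows.
diminishing = grow(lambda c: 1.0 / (1.0 + math.log(1.0 + c)))

print(f"proportional returns after 20 steps: {explosive[-1]:.1f}")    # ~3325.3
print(f"diminishing returns after 20 steps:  {diminishing[-1]:.1f}")  # ~8.6
```

Both loops are "recursive"; only the first resembles the exponential takeoff described above, which is wedrifid's point: rapid improvement, not any particular growth curve, is the threat.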

Comment author: olalonde 25 April 2012 12:41:07AM *  0 points [-]

Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.

Comment author: Vulture 25 April 2012 02:22:54AM 2 points [-]

Uh... I think the fact that humans aren't cognitively self-modifying (yet!) doesn't have to do with our intelligence level so much as the fact that we were not designed explicitly to be self-modifying, as the SIAI is assuming any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.

Comment author: olalonde 25 April 2012 11:21:47AM 0 points [-]

Isn't it implied that sub-human intelligence is not designed to be self-modifying given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?

Comment author: Vulture 26 April 2012 01:09:18AM 0 points [-]

My understanding was that in your comment you basically said that our current inability to modify ourselves is evidence that an AGI of human-level intelligence would likewise be unable to self-modify.

Comment author: adamisom 25 April 2012 03:13:22AM 0 points [-]

This is a really stupid question, but I don't grok the distinction between 'learning' and 'self-modification' - do you get it?

Comment author: Vulture 25 April 2012 04:16:56AM 2 points [-]

By my understanding, learning is basically when a program itself collects the data it uses, through interaction with some external system. Self-modification, on the other hand, is when the program has direct read/write access to its own source code, so it can modify its own decision-making algorithm directly, not just the data set its algorithm uses.
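
To make the contrast concrete, here is a minimal sketch (an editorial toy example, not from the thread): the first agent keeps a fixed decision rule and only updates the data that rule consults, while the second rewrites its own decision rule at runtime.

```python
# Toy contrast between "learning" and "self-modification".

class LearningAgent:
    """Fixed decision rule; only the data it consults changes."""
    def __init__(self):
        self.observations = []          # accumulated data

    def learn(self, observation):
        self.observations.append(observation)

    def decide(self):
        # The algorithm itself never changes: always act on the running average.
        avg = sum(self.observations) / len(self.observations)
        return "act" if avg > 0.5 else "wait"

class SelfModifyingAgent:
    """Can replace its own decision procedure while running."""
    def __init__(self):
        self.decide = lambda: "wait"    # initial decision rule

    def rewrite_self(self, new_rule):
        # Direct write access to its own decision-making code.
        self.decide = new_rule

learner = LearningAgent()
learner.learn(0.9)
print(learner.decide())                 # "act" -- same rule, new data

modifier = SelfModifyingAgent()
modifier.rewrite_self(lambda: "act")    # the rule itself has been replaced
print(modifier.decide())                # "act" -- new rule
```

This is, of course, a caricature; as TheOtherDave notes just below, the boundary between code and data is not always crisp.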

Comment author: TheOtherDave 25 April 2012 04:48:00AM 1 point [-]

This seems to presume a crisp distinction between code and data, yes?
That distinction is not always so crisp. Code fragments can serve as data, for example.
But, sure, it's reasonable to say a system is learning but not self-modifying if the system does preserve such a crisp distinction and its code hasn't changed.