AGI/FAI Theorist for Hire

12 Peter_de_Blanc 15 July 2011 03:50PM

I'm nearing the end of my employment at SIAI and looking for my next gig. If all else fails I will likely move back to the Bay Area (I am currently in Japan) and take a job as a programmer somewhere. However, I would prefer to focus my attention directly on developing AGI and FAI theory. In addition to my current projects (described below), I can try to answer mathematical, philosophical, or other questions for a bit of cash. For some of my previous work, see my page on the arXiv.

In the past year I've been involved in two major projects at SIAI. Steve Rayhawk and I were asked to review existing AGI literature and produce estimates of development timelines for AGI. My work on this project got rather bogged down and proceeded slowly, although I did learn a lot and I've moved in the direction of predicting AGI soonish (5-20 years). After this I tried to produce an AGI technology demo for Google's AGI-11 conference. I was unable to finish my demo in time for the submission deadline, and shortly afterwards SIAI decided to let me go.

I have several projects that I would like to move forward with, and if I can get adequate funds (about $1000 per month to ensure my survival, or $2000 to live comfortably) I will be able to work on them.

Current project ideas:

  1. Continue development on my incomplete AGI project (ideally, technical details not to be published).
  2. Write a paper on AGI models that can be used as a basis for FAI research (similar to the way AIXI and its ilk are used now, but closer to reality than AIXI).
  3. Figure out how an AI can reason formally about using objects in its environment as tools for performing computations.
  4. I'm also interested in repurposing machine learning algorithms used for finding plausible hypotheses about data distributions into algorithms for finding action policies with high expected utility.

I'm open to suggestions for other topics. I don't consider myself an expert at empiricism, so I prefer to work in domains where I can reason formally. Some things I'd be up for:

  1. If you have informal questions or concerns, I can try to think of formal mathematical questions that are similar.
  2. Once we're dealing with a mathematical question, I can try to answer it.
  3. If a question looks too hard for me to answer (as will often be the case), I can try to figure out exactly what is hard about it.
  4. I'm also interested in writing problem sets. If you want to learn about some weird domain that no textbook exists for, I'll try to figure out what some introductory problems in that domain would look like.

Prices for any of these services are negotiable. You can contact me here or at peter@spaceandgames.com.

Comment author: Wei_Dai 09 June 2011 01:56:15AM 2 points

I wrote earlier:

Where I've seen people use PDUs in AI or philosophy, they weren't confused, but rather chose to make the assumption of perception-determined utility functions (or even more restrictive assumptions) in order to prove some theorems.

Well, here's a recent SIAI paper that uses perception-determined utility functions, but apparently not in order to prove theorems (since the paper contains no theorems). The author was advised by Peter de Blanc, who two years ago wrote the OP arguing against PDUs. Which makes me confused: does the author (Daniel Dewey) really think that PDUs are a good idea, and does Peter now agree?

Comment author: Peter_de_Blanc 11 June 2011 01:34:05PM 0 points

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

Comment author: Stuart_Armstrong 10 June 2011 12:38:36PM 1 point

"I am a god" is to simplistic. I can model it better as a probability, that varies with N, that you are able to move the universe to UN(N). This tracks how good a god you are, and seems to make the paradox disappear.

Comment author: Peter_de_Blanc 11 June 2011 05:31:22AM 0 points

This tracks how good a god you are, and seems to make the paradox disappear.

How? Are you assuming that P(N) goes to zero?
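A minimal sketch of why this question matters, assuming for illustration an unbounded utility U(N) = N (my choice, not from the thread): P(N) going to zero is not enough for the expected utility to be finite; the sum converges only if P(N) falls faster than U(N) grows.

```python
import math

def partial_sums(p, u, terms):
    """Partial sums of E[U] = sum over N of p(N) * u(N)."""
    total, sums = 0.0, []
    for n in range(1, terms + 1):
        total += p(n) * u(n)
        sums.append(total)
    return sums

u = lambda n: n  # unbounded utility, growing linearly in N

# Prior 1: P(N) falls geometrically -> E[U] converges (to 2 here).
geometric = partial_sums(lambda n: 2.0 ** -n, u, 60)

# Prior 2: P(N) ~ 1/N^2 (normalized by 6/pi^2) -> E[U] is a harmonic
# series and diverges, even though P(N) -> 0.
quadratic = partial_sums(lambda n: (6 / math.pi ** 2) / n ** 2, u, 10 ** 5)

print(geometric[-1])  # settles near 2
print(quadratic[-1])  # still growing without bound
```

So "P(N) goes to zero" splits into two cases: fast enough decay tames the paradox, slow polynomial decay does not.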

Comment author: orthonormal 10 June 2011 07:33:49AM 0 points

LCPW isn't even necessary: do you really think that it wouldn't make a difference that you'd care about?

Comment author: Peter_de_Blanc 10 June 2011 08:27:20AM 0 points

LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you're claiming that no bounded utility function reflects your preferences accurately.

Comment author: drethelin 08 June 2011 05:16:16PM 1 point

resources, whether physical or computational. Presumably the AI is programmed to utilize resources in a parsimonious manner, with terms governing various applications of the resources, including powering the AI, and deciding on what to do. If the AI is programmed to limit what it does at some large but arbitrary point, because we don't want it taking over the universe or whatever, then this point might end up actually being before we want it to stop doing whatever it's doing.

Comment author: Peter_de_Blanc 09 June 2011 08:09:59AM 0 points

That doesn't sound like an expected utility maximizer.

Comment author: orthonormal 08 June 2011 11:02:52PM 1 point

A small risk of losing the utility it was previously counting on.

Of course you can do intuition pumps either way; I don't feel like I'd want the AI to sacrifice everything in the universe we know for a 0.01% chance of making it in a bigger universe, but some level of risk has to be worth a vast increase in potential fun.

Comment author: Peter_de_Blanc 09 June 2011 08:09:08AM 1 point

It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.

Comment author: orthonormal 08 June 2011 07:42:52AM 1 point

Upvoted because the objection makes me uncomfortable, and because none of the replies satisfy my mathematical/aesthetic intuition.

However, requiring utilities to be bounded also strikes me as mathematically ugly and practically dangerous: what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

Thus I view this as a currently unsolved problem in decision theory, and a better intuition-pump version than Pascal's Mugging. Thanks for posting.

Comment author: Peter_de_Blanc 08 June 2011 08:41:20AM 5 points

what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

It's not worth what?
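A hedged sketch of the worry Peter is querying, using a hypothetical bounded utility u(x) = 1 - 2^(-x) (the function and the numbers are mine, chosen only for illustration): once the agent is near the bound, even doubling its resources buys almost no additional utility, so almost any cost or risk would outweigh further expansion.

```python
# Hypothetical bounded utility of resources x, bounded above by 1.
def u(x):
    return 1 - 2.0 ** (-x)

current = 17          # resources already secured (arbitrary units)
doubled = 2 * current

gain = u(doubled) - u(current)  # upside of doubling resources
print(u(current))  # already extremely close to the bound of 1
print(gain)        # tiny marginal utility left for expansion
```

Peter's reply points at the gap in the intuition pump: "not worth it" only makes sense relative to some cost measured in the same utility, and near the bound those costs shrink too.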

Comment author: [deleted] 26 May 2011 06:52:21PM 7 points
  1. A physically plausible scenario would involve growing up under a monochromatic light source.

  2. Growing up without sensory input actually affects the brain; see Wikipedia's article on monocular deprivation. I'm actually an example of this: I was born without the Mystic Eyes Of Depth Perception, so I'll never know what stereoscopic vision "feels like".

  3. I propose that "qualia" is a word that, like "microevolution", is mainly used by people who are very confused (and dissolving the question is the appropriate approach).

Comment author: Peter_de_Blanc 28 May 2011 01:06:46PM 0 points

Depth perception can be gained through vision therapy, even if you've never had it before. This is something I'm looking into doing, since I also grew up without depth perception.

Comment author: gjm 28 May 2011 12:57:40AM 8 points

"Rational people can't agree to disagree" is an oversimplification. Rational people can perfectly well reach a conclusion of the form: "Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong. We choose, instead, to leave the matter unresolved until either it matters more or we see better prospects of resolving it."

Imperfectly rational people who are aware of their imperfect rationality (note: this is in fact the nearest any of us actually come to being rational people) might also reasonably reach a conclusion of this form: "Perhaps clear enough thinking on both sides would suffice to let us resolve this. However, it's apparent that at least one of us is currently sufficiently irrational about it that trying to reach agreement poses a real danger of spoiling the good relations we currently enjoy, and while clearly that irrationality is a bad thing it doesn't seem likely that trying to resolve our current disagreement now is the best way to address it, so let's leave it for now."

I suspect (with no actual evidence) that when two reasonably-rational people say they're agreeing to disagree, what they mean is often approximately one of the above or a combination thereof, and that they're often wise to "agree to disagree". The fact that there are theorems saying that two perfect rationalists who care about nothing more than getting the right answer to the question they're currently disputing won't "agree to disagree" seems to me to have little bearing on this.

Eliezer, if you're reading this: You may remember that a while back on OB you and Robin Hanson discussed the prospects of rapidly improving artificial intelligence in the nearish future. By no means did you resolve your differences in that discussion. Would it be fair to characterize the way it ended as "agreeing to disagree"? From the outside, it sure looks like that's what it amounted to, whatever you may or may not have said to one another about it. Perhaps you and/or Robin might say "Yeah, but the other guy isn't really rational about this". Could be, but if the level of joint rationality required for "can't agree to disagree" is higher than that of {Eliezer,Robin} then it's not clear how widely applicable the principle "rational people can't agree to disagree" really is. (Note for the avoidance of doubt: The foregoing is not intended to imply that Eliezer and Robin are equally rational; I do not intend to make any further comment on my opinions, if any, on that matter.)

Comment author: Peter_de_Blanc 28 May 2011 04:52:50AM 2 points

Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong.

You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not "both right" or "both wrong", but "both -3 decibels."
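For readers unfamiliar with the decibel framing, a small sketch (the probabilities are illustrative, not from the thread): credence can be measured in decibels of log-odds, 10*log10(p/(1-p)), and "both -3 decibels" means each side's odds are roughly halved, moving both toward 50/50 rather than vindicating either.

```python
import math

def decibels(p):
    """Log-odds of probability p, in decibels."""
    return 10 * math.log10(p / (1 - p))

def prob(db):
    """Inverse: probability corresponding to db decibels of log-odds."""
    return 1 / (1 + 10 ** (-db / 10))

# Two disputants at +3 dB and -3 dB (about 0.67 vs 0.33 on the claim).
# If exchanging views costs each of them ~3 dB toward the other, both
# land near 0 dB, i.e. 50/50: "both -3 decibels", neither side "right".
print(prob(3), prob(-3), prob(0))
```

The point being that a cheap resolution typically moves both parties toward shared uncertainty, not toward one party's prior position.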

Comment author: ata 16 May 2011 08:51:14AM 0 points

Obviously I didn't mean that being broke (or anything) is infinite disutility. Am I mistaken that the utility of money is otherwise generally modeled as logarithmic?

In response to comment by ata on Circular Altruism
Comment author: Peter_de_Blanc 16 May 2011 10:25:21AM 1 point

Obviously I didn't mean that being broke (or anything) is infinite disutility.

Then what asymptote were you referring to?
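A minimal sketch (my illustration, not from the thread) of the tension Peter is pointing at: under the standard logarithmic model u(wealth) = log(wealth), the only asymptote is at zero wealth, where utility diverges to negative infinity, which is exactly the "being broke is infinite disutility" claim ata disclaimed.

```python
import math

def log_utility(wealth):
    """Logarithmic utility of money; diverges to -inf as wealth -> 0+."""
    return math.log(wealth)

# Utility drops without bound as wealth approaches zero: the asymptote
# of the log model sits precisely at "broke".
for w in (1.0, 0.01, 1e-6, 1e-12):
    print(w, log_utility(w))
```

So disclaiming infinite disutility at zero while keeping log utility leaves it unclear which asymptote remains, which is what the question presses on.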
