Call for new SIAI Visiting Fellows, on a rolling basis

Post author: AnnaSalamon 01 December 2009 01:42AM 29 points

Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.

Now, the new and better version has arrived.  We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths.  Working with this crowd transformed my world; it felt like I was learning to think.  I wouldn’t be surprised if it can transform yours.

A representative sample of current projects:

  • Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
  • The Peter Platzer Popular Book Planning Project;
  • Editing and publicizing theuncertainfuture.com;
  • Improving the LW wiki, and/or writing good LW posts;
  • Getting good popular writing and videos on the web, of sorts that improve AI risks understanding for key groups;
  • Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).

Interested, but not sure whether to apply?

Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.  That kind of timidity destroys the world, by failing to save it.  So if that’s your situation, send us an email.  Let us be the ones to say “no”.  Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.

And if you’re seriously interested in risk reduction but at a later time, or in another capacity -- send us an email anyway.  Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.

What we’re looking for

At bottom, we’re looking for anyone who:

  • Is capable (strong ability to get things done);
  • Seriously aspires to rationality; and
  • Is passionate about reducing existential risk.

Bonus points for any (you don’t need them all) of the following traits:

  • Experience with management, for example in a position of responsibility in a large organization;
  • Good interpersonal and social skills;
  • Extraversion, or interest in other people, and in forming strong communities;
  • Dazzling brilliance at math or philosophy;
  • A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
  • Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
  • Great writing skills and/or marketing skills;
  • Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
  • Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
  • Web programming skill, or website design skill;
  • Legal background;
  • A history of successfully pulling off large projects or events;
  • Unusual competence of some other sort, in some domain we need, but haven’t realized we need.
  • Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.

If you think this might be you, send a quick email to jasen@intelligence.org.  Include:

  1. Why you’re interested;
  2. What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
  3. Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.

Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.

As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.

Looking forward to hearing from you,
Anna

ETA (as of 3/25/10):  We are still accepting applications, for summer and in general.  Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.

 

Comments (264)

Comment author: Yorick_Newsome 01 December 2009 06:39:15AM *  19 points [-]

I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.

I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into the position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)

Comment author: AnnaSalamon 01 December 2009 09:10:49AM *  22 points [-]

Yorick, and anyone else who is serious about reducing existential risk and is not in our contact network: please email me. anna at singinst dot org. The reason you should email is that empirically, people seem to make much better decisions about what paths will reduce existential risks when in dialog with others. Improved information here can go a long way.

I'll answer anyway, for the benefit of lurkers (but Yorick, don't believe my overall advice. Email me instead, about your specific strengths and situation):

  1. Work on rationality. To help existential risk at all, you need: (a) unusual ability to weigh evidence fairly, in confusing instances and despite the presence of strong emotions; (b) the ability to take far-mode evidence seriously on an emotional and action-based level. (But (b) is only an asset after you have formed careful, robust, evidence-based conclusions. If you're as bad a thinker as 95% of the population, acting on far-mode conclusions can be dangerous, and can make your actions worse.)
  2. Learn one of: math, physics, programming, or possibly analytic philosophy, because they teach useful habits of thought. Programming is perhaps the most useful of these because it can additionally be used to make money.
  3. Learn people skills. Tutoring skills; sales skills; the ability to start and maintain positive conversations with strangers; management skills and experience; social status non-verbals (which one can learn in the pickup community, among other places); observational skills and the ability to understand and make accurate predictions about the people around you; skill at making friends; skill at building effective teams...
  4. Learn to track details, to direct your efforts well within complex projects, and to reliably get things done. Exercise regularly, too.
Comment author: Kaj_Sotala 01 December 2009 04:02:28PM *  8 points [-]

Note that it's also good to have some preliminary discussion here, moving on to e-mail mainly if personal details come up that one feels unwilling to share in public. If a lot of people publicly post their interest to participate, then that will encourage others to apply as well. Plus it gives people a picture of what sort of other folks they might end up working with. Also, discussing the details of the issue in public will help those who might initially be too shy to send a private e-mail, as they can just read what's been discussed before. Even if you weren't shy as such, others might raise questions you didn't happen to think of. For instance, I think Anna's four points above are good advice for a lot of people, and I'm happy that Yorick posted the comment that prompted this response and didn't just e-mail Anna directly.

(EDIT: Removed a few paragraphs as I realized I'd have to rethink their content.)

Comment author: Morendil 01 December 2009 04:30:01PM 3 points [-]

I don't feel like having this discussion in public, but Anna's invitation is framed in broad enough terms that I'll be getting in touch.

Comment author: Kevin 07 March 2010 09:18:52AM 4 points [-]

Where are the non-pickup-community places to learn social status non-verbals?

Comment author: Morendil 02 December 2009 07:20:45PM 2 points [-]

I've sent an email your way. Given that email has become a slightly unreliable medium, thanks to the arms race between spam and Bayesian (and other) countermeasures, I'd appreciate an acknowledgement (even if just to say "got it"), here or via email.
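The "Bayesian countermeasures" in question are word-frequency spam filters in the style of Paul Graham's "A Plan for Spam": score a message by combining the per-word likelihoods learned from spam and non-spam corpora. A minimal sketch, with a made-up toy corpus (the training strings and word probabilities are purely illustrative):

```python
from math import prod

# Toy training corpora (made up for illustration).
spam = ["cheap pills online", "online casino cheap", "cheap cheap pills"]
ham = ["decision theory draft", "summer program application", "draft application notes"]

def word_probs(docs):
    """Estimate P(word | class) with add-one smoothing over the shared vocabulary."""
    words = [w for d in docs for w in d.split()]
    vocab = {w for d in spam + ham for w in d.split()}
    return {w: (words.count(w) + 1) / (len(words) + len(vocab)) for w in vocab}

p_spam, p_ham = word_probs(spam), word_probs(ham)

def spam_score(msg, prior=0.5):
    """Posterior P(spam | msg) under a naive word-independence assumption."""
    ps = prior * prod(p_spam[w] for w in msg.split() if w in p_spam)
    ph = (1 - prior) * prod(p_ham[w] for w in msg.split() if w in p_ham)
    return ps / (ps + ph)

print(spam_score("cheap pills"))   # high: these words dominate the spam corpus
print(spam_score("theory draft"))  # low: these words dominate the ham corpus
```

A first-contact email full of words the filter has never associated with the recipient's ham corpus is exactly the case where such a score can tip the wrong way, which is why Morendil's worry about first contacts is reasonable.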

Comment author: AnnaSalamon 02 December 2009 09:35:32PM 5 points [-]

Thanks for the heads up. Oddly enough, it was sitting in the spam filter on my SIAI account (without making it through forwarding to my gmail account, where I was checking the spam filter). Yours was the only message caught in the SIAI spam filter, out of 19 who emailed so far in response to this post.

Did you have special reason to expect to be caught in a spam filter?

Comment author: Morendil 02 December 2009 10:08:26PM 3 points [-]

It happens every so often to email people send me, so I periodically check the spam folder on Gmail; by symmetry I assume it happens to email I send. It's more likely to occur on a first contact, too. And last, I spent a fair bit of time composing that email, getting over the diffidence you're accurately assuming.

Comment author: MichaelBishop 03 December 2009 01:29:27AM *  5 points [-]

your handle sounds like a brand name drug ;) e.g. paxil

Comment author: [deleted] 05 December 2009 07:54:28AM 0 points [-]

I wonder how long I can expect to wait before receiving a response. I sent my email on Wednesday, by the way.

Comment author: SilasBarta 07 December 2009 04:02:46PM 1 point [-]

So you want to know f(x) := P(will receive a response|have not received a response in x days) for values of x from 0 to say, 7?
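Under a toy model, Silas's f(x) has a closed form. Suppose the prior probability of ever getting a reply is p, and that conditional on a reply coming, each day it arrives with probability q (a geometric delay). Then f(x) = p(1-q)^x / (p(1-q)^x + 1 - p): each silent day is weak evidence that no reply is coming. The numbers p and q below are made up for illustration:

```python
def f(x, p=0.8, q=0.4):
    """P(will ever get a reply | no reply in the first x days),
    assuming prior P(reply) = p and a geometric daily reply rate q."""
    silent_so_far = (1 - q) ** x      # reply is coming, just slow
    num = p * silent_so_far
    return num / (num + (1 - p))      # versus: reply never coming

for days in range(8):
    print(days, round(f(days), 3))    # starts at 0.8 and declines each day
```

The qualitative behavior is the interesting part: f(0) equals the prior p, and the posterior decays monotonically toward zero as the silence stretches on.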

Comment author: AnnaSalamon 07 December 2009 06:16:08AM 1 point [-]

I'm sorry; I still haven't responded to many of them. Somewhere in the 1-3 days range for an initial response, probably.

Comment author: Morendil 05 December 2009 09:11:19AM 0 points [-]

I suggest you reply to the parent (Anna's) comment; that will show up in her inbox.

Comment author: arbimote 15 January 2010 10:24:03AM 0 points [-]

I sent an email on January the 10th, and haven't yet got a reply. Has my email made it to you? Granted, it is over a month since this article was posted, so I understand if you are working on things other than applications at this point...

Comment author: Henrik_Jonsson 01 December 2009 02:12:21AM 17 points [-]

I took part in the 2009 summer program during a vacation from my day job as a software developer in Sweden. This entailed spending five weeks with the smartest and most dedicated people I have ever met, working on a wide array of projects both short- and long-term, some of which were finished by the time I left and some of which are still on-going.

My biggest worry beforehand was that I would not be anywhere near talented enough to participate and contribute in the company of SIAI employees and supporters. That seems not to have occurred, though I don't claim to have anywhere near the talent of most others involved. Some of the things I was involved with during the summer were work on the Singularity Summit website, as well as continuing the Uncertain Future project for assigning probability distributions to events and having the conclusions calculated for you. I also worked on papers with Carl Shulman and Nick Tarleton, read a massive amount of papers and books, took trips to San Francisco and elsewhere, played games, discussed weird forms of decision theories and counter-factual everything, etc, etc.

My own comparative advantages seem to be having the focus to keep hacking away at projects, as well as the specialized skills that came from having a CS background and some experience (less than a year though) of working in the software industry. I'm currently writing this from the SIAI house, to which I returned about three weeks ago. This time I mainly focused on getting a job as a software developer in the Bay area (I seem to have succeeded), for the aims of earning money (some of which will go to donations) and also making it easier for me to participate in SIAI projects.

I'd say that the most important factor for people considering applying is whether they have strong motivation and a high level of interest in the issues that SIAI involves itself with. Agreeing with specific perceived beliefs of SIAI or people involved with it is not necessary, and the disagreements will be brought out and discussed as thoroughly as you could ever wish for. As long as the interest and motivation are there, the specific projects you want to work on should work themselves out nicely. My own biggest regret is that I kept lurking for so long before getting in touch with the people here.

Comment author: komponisto 01 December 2009 09:27:02PM *  13 points [-]

Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI, because they weren’t sure they were “good enough”.

Well, who can blame them?

Seriously, FYI (where perhaps the Y stands for "Yudkowsky's"): that document (or a similar one) really rubbed me the wrong way the first time I read it. It just smacked of "only the cool kids can play with us". I realize that's probably because I don't run into very many people who think they can easily solve FAI, whereas Eliezer runs into them constantly; but still.

Comment author: Eliezer_Yudkowsky 01 December 2009 10:54:43PM 6 points [-]

That's if you want to be an FAI developer and on the Final Programming Team of the End of the World, not if you want to work for SIAI in any capacity whatsoever. If you're writing to myself, rather than Anna, then yes, mentioning e.g. the International Math Olympiad will help to get my attention. (Though I'm certain the document does need updating - I haven't looked at it myself in a long while.)

Comment author: Kaj_Sotala 01 December 2009 11:32:30PM 7 points [-]

It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though. It mentions that if you want to help but aren't a genius, sure, you can be a donor, or you can see if you get into a limited number of slots for non-genius programmers, but that's it.

I'm also one of the people who's been discouraged from the thought of being useful for SIAI by that document, though. (Fortunately people have afterwards been giving the impression I might be of some use after all. Submitted an application today.)

Comment author: Eliezer_Yudkowsky 02 December 2009 04:25:41AM 6 points [-]

Anna, and in general the Vassarian lineage, are more effective cooperators than I am. The people who I have the ability to cooperate with, form a much more restricted set than those who they can cooperate with.

Comment author: Nick_Tarleton 02 December 2009 04:12:36AM 2 points [-]

It does kinda give the impression that a) donors and b) programmers are all that SIAI has a use for, though.

I once had that impression too, almost certainly in part from SYWTBASAIP.

Comment author: DanArmak 01 December 2009 09:55:11PM *  11 points [-]

It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java".

...I don't know if that's a bad joke or a hint that the writer isn't being serious. Well, if it's a joke, it's bad and not funny. Now I'll have nightmares of the best programmers Planet Earth could field failing to write a FAI because they used Java of all things.

Comment author: komponisto 01 December 2009 10:14:49PM 3 points [-]

It rubbed me the wrong way when, after explaining for several pages that successful FAI Programmers would have to be so good that the very best programmers on the planet may not be good enough, it added - "We will probably, but not definitely, end up working in Java"

I had the same thought -- how incongruous! (Not that I'm necessarily particularly qualified to critique the choice, but it just sounded...inappropriate. Like describing a project to build a time machine and then solemnly announcing that the supplies would be purchased at Target.)

I assume, needless to say, that (at least) that part is no longer representative of Eliezer's current thinking.

Comment author: DanArmak 01 December 2009 10:25:32PM 3 points [-]

I can't understand how it could ever have been part of his thinking. (Java was even worse years ago!)

Comment author: Jordan 01 December 2009 10:12:18PM 2 points [-]

It's mentioned twice, so I doubt it's a joke.

Comment author: Liron 01 December 2009 10:42:14PM 2 points [-]

This was written circa 2002 when Java was at least worthy of consideration compared to the other options out there.

Comment author: Eliezer_Yudkowsky 01 December 2009 10:50:23PM 8 points [-]

Yup. The logic at the time went something like, "I want something that will be reasonably fast and scale to lots of multiple processors and runs in a tight sandbox and has been thoroughly debugged with enterprise-scale muscle behind it, and which above all is not C++, and in a few years (note: HAH!) when we start coding, Java will probably be it." There were lots of better-designed languages out there but they didn't have the promise of enterprise-scale muscle behind their implementation of things like parallelism.

Also at that time, I was thinking in terms of a much larger eventual codebase, and was much more desperate to use something that wasn't C++. Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

Mostly in that era there weren't any good choices, so far as I knew then. Ben Goertzel, who was trying to scale a large AI codebase, was working in a mix of C/C++ and a custom language running on top of C/C++ (I forget which), which I think he had transitioned either out of Java or something else, because nothing else was fast enough or handled parallelism correctly. Lisp, he said at that time, would have been way too slow.

Comment author: kpreid 01 December 2009 11:15:35PM 6 points [-]

Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

I'd rather the AI have a very low probability of overwriting its supergoal by way of a buffer overflow.

Comment author: Nick_Tarleton 02 December 2009 04:14:26AM 6 points [-]

Proving no buffer overflows would be nothing next to the other formal verification you'd be doing (I hope).

Comment author: DanArmak 02 December 2009 02:12:57AM 2 points [-]

I fully agree that C++ is much, much, worse than Java. The wonder is that people still use it for major new projects today. At least there are better options than Java available now (I don't know what the state of art was in 2002 that well).

If you got together an "above-genius-level" programming team, they could design and implement their own language while they were waiting for your FAI theory. Probably they would do it anyway on their own initiative. Programmers build languages all the time - a majority of today's popular languages started as a master programmer's free time hobby. (Tellingly, Java is among the few that didn't.)

A custom language built and maintained by a star team would be at least as good as any existing general-purpose one, because you would borrow design you liked and because programming language design is a relatively well explored area (incl. such things as compiler design). And you could fit the design to the FAI project's requirements: choosing a pre-existing language means finding one that happens to match your requirements.

Incidentally, all the good things about Java - including the parallelism support - are actually properties of the JVM, not of Java the language; they're best used from other languages that compile to the JVM. If you said "we'll probably run on the JVM", that would have sounded much better than "we'll probably write in Java". Then you'll only have to contend with the CLR and LLVM fans :-)

Comment author: Eliezer_Yudkowsky 02 December 2009 04:27:23AM 4 points [-]

I don't think it will mostly be a coding problem. I think there'll be some algorithms, potentially quite complicated ones, that one will wish to implement at high speed, preferably with reproducible results (even in the face of multithreading and locks and such). And there will be a problem of reflecting on that code, and having the AI prove things about that code. But mostly, I suspect that most of the human-shaped content of the AI will not be low-level code.

Comment author: Eliezer_Yudkowsky 02 December 2009 09:17:08AM 0 points [-]

How's the JVM on concurrency these days? My loose impression was that it wasn't actually all that hot.

Comment author: mattnewport 02 December 2009 09:51:12AM *  2 points [-]

I think it's pretty fair to say that no language or runtime is that great on concurrency today. Coming up with a better way to program for many-core machines is probably the major area of research in language design today and there doesn't appear to be a consensus on the best approach yet.

I think a case could be made that the best problem a genius-level programmer could devote themselves to right now is how to effectively program for many-core architectures.

Comment author: Henrik_Jonsson 02 December 2009 07:50:39PM *  0 points [-]

My impression is that the JVM is worse at concurrency than every other approach that's been tried so far.

Haskell and other functional programming languages have many promising ideas, but they aren't widely used in industry, AFAIK.

This presentation gives a good short overview of the current state of concurrency approaches.

Comment author: anonym 02 December 2009 08:29:48AM 0 points [-]

Speaking of things that aren't Java but run on the JVM, Scala is one such (really nice) language. It's designed and implemented by one of the people behind the javac compiler, Martin Odersky. The combination of excellent support for concurrency and functional programming would make it my language of choice for anything that I would have used Java for previously, and it seems like it would be worth considering for AI programming as well.

Comment author: komponisto 03 December 2009 08:54:34PM 1 point [-]

Today I would say that if you can write AI at all, you can write the code parts in C, because AI is not a coding problem.

Exactly -- which is why the sentence sounded so odd.

Comment author: Eliezer_Yudkowsky 03 December 2009 09:18:32PM 7 points [-]

Well, yes, Yudkowsky-2002 is supposed to sound odd to a modern LW reader.

Comment author: anonym 02 December 2009 06:51:09AM 0 points [-]

SYWTBASAIP always makes me think of Reid Barton -- which I imagine is probably quite a bit higher than EY meant to convey as a lower bound -- so I know what you mean.

Comment author: steven0461 02 December 2009 03:13:10AM *  10 points [-]

After some doubts as to ability to contribute and the like, I went to be an intern in this year's summer program. It was fun and I'm really glad I went. At the moment, I'm back there as a volunteer, mostly doing various writing tasks, like academic papers.

Getting to talk a lot to people immersed in these ideas has been both educational and motivating, much more so than following things through the internet. So I'd definitely recommend applying.

Also, the house has an awesome library that for some reason isn't being mentioned. :-)

Comment author: Morendil 02 December 2009 08:40:50AM 4 points [-]

Is that library's catalog available on a site like LibraryThing ?

If it isn't, please get one of those visiting fellows to spend as long as it takes entering ISBNs so that others can virtually browse your bookshelves.

Comment author: MBlume 05 December 2009 08:00:44AM *  7 points [-]
Comment author: Morendil 11 December 2009 09:32:22AM 2 points [-]

I've set up a SIAI account on LibraryThing, for a bunch of reasons even though I've not heard back from MBlume.

http://www.librarything.com/catalog/siai

The heuristic "it's easier to seek forgiveness than permission" seemed to apply, the upvotes on the comments below indicate interest, I wanted to separate my stuff from SIAI's but still have a Web 2.0-ish way to handle it, and information wants to be free.

If this was a mistake on my part, it's easily corrected.

Comment author: Morendil 05 December 2009 10:47:01PM 2 points [-]

Re anonym's comment: perhaps you'd like to set up a SIAI/LW LibraryThing account. I'll gladly donate the $25 to make it a lifetime account.

Comment author: anonym 05 December 2009 11:13:25PM *  2 points [-]

Also an easy way to introduce SIAI to new people who might be interested in learning more (and donating), because librarything recommends similar libraries and shows you which libraries contain a book, etc.

Comment author: Kevin 01 January 2010 07:21:08AM 1 point [-]

I like how the Wii games are included

Comment author: Morendil 05 December 2009 09:12:20AM 1 point [-]

Thanks !

Comment author: Matt_Duing 01 January 2010 06:49:21AM 0 points [-]

I second Morendil's thanks. This list provides a view of what material is being thought about and discussed by the SIAI volunteers, and I hope that it alleviates some of the concerns of potential applicants who are hesitating.

Comment author: anonym 05 December 2009 08:56:38AM 0 points [-]

If it's an option, please make the spreadsheet sortable. It would be much easier to browse if it were sorted by (location, creator), so all math books would be together, and books by the same author on the same topic would be together.

Thanks for making this available though. I enjoyed browsing and already bought one. You might consider putting Amazon links in there with an affiliate tag for SIAI.

Comment author: Morendil 05 December 2009 09:55:58AM 4 points [-]
Comment author: anonym 05 December 2009 08:14:43PM *  0 points [-]

Thanks, that's helpful, but the original spreadsheet being sortable would still be very useful, because LibraryThing doesn't have a "shelf" field, so you can't sort and view all math books together, for example.

Comment author: wkvong 07 December 2009 04:27:17PM 1 point [-]

I've sorted MBlume's original list so that it displays all the books in the same location together; however, some of the locations (living room floor/shelf, etc.) are collections of books on different topics. I may sort them out another time.

Here it is: http://spreadsheets.google.com/pub?key=t5Fz_UEo8JLZyEFfUvJVvPA&output=html

Comment author: anonym 07 December 2009 04:52:06PM 0 points [-]

Thank you.

Comment author: [deleted] 05 December 2009 07:30:35AM *  0 points [-]

And make sure they use a barcode scanner. Given that books tend to have ISBN barcodes, it would be... irrational not to.

(If it seems to you like a matter of knowledge, not rationality, then take a little while to ponder how you could be wrong.)

Comment author: MBlume 05 December 2009 08:01:22AM *  0 points [-]

I'm using (at Andrew Hay's kind suggestion) Delicious Library 2 (sadly only available on Macintosh, but I happen to be an Apple fanboy) which integrates with my webcam to do all my barcode scanning for me.

Comment author: Yorick_Newsome 02 December 2009 06:40:50AM 2 points [-]

I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.

Comment author: MBlume 14 May 2010 06:07:55AM 1 point [-]

lol, so how does the real thing compare? =)

Comment author: Wei_Dai 02 December 2009 08:23:55AM 7 points [-]

This is a bit off topic, but I find it strange that for years I was unable to find many people interested in decision theory and anthropic reasoning (especially a decision theoretic approach to anthropic reasoning) to talk with, and now they're hot topics (relatively speaking) because they're considered matters of existential risk. Why aren't more people working on these questions just because they can't stand not knowing the answers?

Comment author: Wei_Dai 02 December 2009 11:55:48PM 4 points [-]

Ok, one possible answer to my own question: people who are interested just to satisfy their curiosity, tend to find an answer they like, and stop inquiring further, whereas people who have something to protect have a greater incentive to make sure the answer is actually correct.

For some reason, I can't stop trying to find flaws in every idea I come across, including my own, which causes me to fall out of this pattern.

Comment author: Eliezer_Yudkowsky 03 December 2009 12:05:35AM 0 points [-]

More like if a question activates Philosophy mode, then people just make stuff up at random like the Greek philosophers did, unless they are modern philosophers, in which case they invent a modal logic.

Comment author: Tyrrell_McAllister 03 December 2009 12:45:03AM 7 points [-]

Ancient philosophy would look very different if the Greek philosophers had been making stuff up at random. Plato and Aristotle followed cognitive strategies, strategies that they (1) could communicate and (2) felt constrained to follow. For these reasons, I don't think that those philosophers could be characterized in general as "making stuff up".

Of course, they followed different strategies respectively, and they often couldn't communicate their feelings of constraint to one another. And of course their strategies often just didn't work.

Comment author: Eliezer_Yudkowsky 03 December 2009 12:04:42AM 6 points [-]

You might as well ask why all the people wondering if all mathematical objects exist, haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.

If something isn't a cached reply to the question "What should I think about?" then it's just not surprising if no one is thinking about it. People are crazy, the world is mad.

Comment author: Wei_Dai 04 December 2009 01:19:07AM 7 points [-]

People are crazy, the world is mad.

Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that's a reasonable concern? Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.

So, I'm inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?

Comment author: Eliezer_Yudkowsky 04 December 2009 07:07:36AM 6 points [-]

Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that's a reasonable concern?

On this planet? No. On this planet, I think you're better off just worrying about the object-level state of the evidence. Your visceral nervousness has nothing to do with Aumann. It is conformity.

Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.

What do you care what people who are crazier than average do? You already have enough information to know you're not one of them. You care what these people do, not because you really truly seriously think you might be one of them, but because of the gut-level, bone-deep fear of losing status by seeming to affiliate with a low-prestige group by saying something that sounds similar to what they say. You may be reluctant to admit that you know perfectly well you're not in this group, because that also sounds like something this low-prestige group would say; but in real life, you have enough info, you know you have enough info, and the thought has not seriously crossed your mind in a good long while, whatever your dutiful doubts of your foregone conclusion.

Seriously, just make the break, clean snap, over and done.

So, I'm inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?

Occam's Imaginary Razor. Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.

Comment author: Wei_Dai 04 December 2009 11:47:30AM *  4 points [-]

You're wrong, Eliezer. I am sure that I'm not crazier than average, and I'm not reluctant to admit that. But in order to disagree with most of the world, I have to have good reason to think that I'm more rational than everyone I disagree with, or have some other explanation that lets me ignore Aumann. The only reason I referred to people who are crazier than average is to explain why "people are crazy, the world is mad" is not one of those explanations.

Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.

That's only true if I'm looking for rationalizations, instead of real explanations, right? If so, noted, and I'll try to be careful.

Comment author: Eliezer_Yudkowsky 04 December 2009 01:00:38PM 4 points [-]

But in order to disagree with most of the world, I have to have good reason to think that I'm more rational than everyone I disagree with

You're more rational than the vast majority of people you disagree with. There, I told you up front. Is that reason enough? I can understand why you'd doubt yourself, but why should you doubt me?

That's only true if I'm looking for rationalizations, instead of real explanations, right? If so, noted, and I'll try to be careful.

I'm not saying that you should deliberately stay ignorant or avoid thinking about it, but I suspect that some of the mental health effects of spending lots of time analyzing away other people's disagreements would happen to you even if you miraculously zeroed in on the true answer every time. Which you won't. So it may not be wise to deliberately invest extra thought-time here.

Or maybe divide healthy and risky as follows: Healthy is what you do when you have a serious doubt and are moving to resolve it, for example by reading more of the literature, not to fulfill a duty or prove something to yourself, but because you seriously think there may be stuff out there you haven't read. Risky is anything you do because you want to have investigated in order to prove your own rationality to yourself, or because it would feel too immodest to just think outright that you had the right answer.

The only reason I referred to people who are crazier than average is to explain why "people are crazy, the world is mad" is not one of those explanations.

It is if you stick to the object level. Does it help if I rephrase it as "People are crazy, the world is mad, therefore everyone has to show their work"? You just shouldn't have to spend all that much effort to suppose that a large number of people have been incompetent. It happens so frequently that if there were a Shannon code for describing Earth, "they're nuts" would have a single-symbol code in the language. Now, if you seriously don't know whether someone else knows something you don't, then figure out where to look and look there. But the answer may just be "4", which stands for Standard Explanation #4 in the Earth Description Language: "People are crazy, the world is mad". And in that case, spending lots of effort in order to develop an elaborate dismissal of their reasons is probably not good for your mental health and will just slow you down later if it turns out they did know something else. If by a flash of insight you realize there's a compact description of a mistake that a lot of other people are making, then this is a valuable thing to know so you can avoid it yourself; but I really think it's important to learn how to just say "4" and move on.
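The "single-symbol code" image is just Shannon/Huffman coding: the more frequent a message, the shorter its optimal codeword. A toy sketch, not from the thread; the "explanations" and their frequencies are invented purely for illustration:

```python
import heapq

def huffman(freqs):
    """Build a Huffman code: frequent symbols get short codewords."""
    # Each heap entry: (total frequency, tiebreaker, {symbol: codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codewords.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# Hypothetical frequencies for "why a group of people believes X":
freqs = {"they're nuts": 50, "hidden info": 10,
         "different values": 5, "I'm wrong": 2}
codes = huffman(freqs)
print({s: len(c) for s, c in codes.items()})
# The most common explanation ends up with a 1-bit codeword.
```

The point of the metaphor survives the toy model: in an efficient description language for a world where "4" is the usual answer, saying "4" should be cheap.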

Comment author: RobinHanson 04 December 2009 02:25:21PM 8 points [-]

It will come as a surprise to few people that I disagree strongly with Eliezer here; Wei should not take his word for the claim that Wei is so much more rational than all the folks he might disagree with that he can ignore their differing opinions. Where is this robust rationality test used to compare Wei to the rest of the intellectual world? Where is the evidence for this supposed mental health risk of considering the important evidence of the opinions of others? If the world is crazy, then very likely so are you. Yes it is a good sign if you can show some of your work, but you can almost never show all of your relevant work. So we must make inferences about the thought we have not seen.

Comment author: Eliezer_Yudkowsky 04 December 2009 03:01:18PM 7 points [-]

Well, I think we both agree on the dangers of a wide variety of cheap talk - or to put it more humbly, you taught me on the subject. Though even before then, I had developed the unfortunate personal habit of calling people's bluffs.

So while we can certainly interpret talk about modesty and immodesty in terms of rhetoric, isn't the main testable prediction at stake, the degree to which Wei Dai should often find, on further investigation, that people who disagree with him turn out to have surprisingly good reasons to do so?

Do you think - to jump all the way back to the original question - that if Dai went around asking people "Why aren't you working on decision theory and anthropics because you can't stand not knowing the answers?" that they would have some brilliantly decisive comeback that Dai never thought of which makes Dai realize that he shouldn't be spending time on the topic either? What odds would you bet at?

Comment author: RobinHanson 05 December 2009 04:32:49AM 9 points [-]

Brilliant decisive reasons are rare for most topics, and most people can't articulate very many of their reasons for most of their choices. Their most common reason would probably be that they found other topics more interesting, and to evaluate that reason Wei would have to understand the reasons for thinking all those other topics interesting. Saying "if you can't prove to me why I'm wrong in ten minutes I must be right" is not a very reliable path to truth.

Comment author: CronoDAS 05 December 2009 08:07:00PM 1 point [-]

I'd expect a lot of people to answer "Nobody is paying me to work on it."

Comment author: aausch 04 December 2009 05:03:07AM 1 point [-]

I typically class these types of questions with other similar ones:

What are the odds that a strategy of approximately continuous insanity, interrupted by clear thinking, is a better evolutionary adaptation than continuous sanity, interrupted by short bursts of madness? That the first, in practical, real-world terms, causes me to lead a more moral or satisfying life? Or even, that the computational resources that my brain provides to me as black boxes, can only be accessed at anywhere near peak capacity when I am functioning in a state of madness?

Is it easier to be sane, emulating insanity when required to, or to be insane, emulating sanity when required to?

Comment author: byrnema 04 December 2009 04:45:36AM *  0 points [-]

Given that we're sentient products of evolution, shouldn't we expect a lot of variation in our thinking?

Finding solutions to real-world problems often involves searching through a space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components. I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.

Observing the world for 32-odd years, it appears to me that each human being is randomly imprinted with a way of thinking and a set of ideas to obsess about. (Einstein had a cluster of ideas that were extremely useful for 20th century physics, most people's obsessions aren't historically significant.)

Comment author: GuySrinivasan 04 December 2009 05:08:14AM 2 points [-]

Why would evolution's search results tend to search in the same way evolution searches?

Comment author: byrnema 04 December 2009 06:03:34AM 0 points [-]

They search in the same way because random sampling via variability is an effective way to search. However, humans could perform effective searches by variation at the individual or population level (for example, a sentient creature could model all different kinds of thought to think of different solutions) but I was arguing for the variation at the population level.

Variability at the population level is explained by the fact that we are products of evolution.

Of course, human searches are effective as a result of both kinds of variation.

Not that any of this was thought out before your question... This is the usual networked-thought-reasoning versus linear-written-argument mapping problem.

Comment author: GuySrinivasan 04 December 2009 06:14:13AM 0 points [-]

Heh, I came to a similar thought walking home after asking the question... that it seems at least plausible the only kinda powerful optimization processes that are simple enough to pop up randomlyish are the ones that do random sampling via variability.

I'm not sure it makes sense that variability at the population level is much explained by coming from evolution, though. Seems to me, as a bound, we just don't have enough points in the search space to be worth it even with 6b minds, and especially not down at the population levels during most of evolution. Then there's the whole difficulty with group selection, of course. My intuition says no... yours says yes though?

Comment author: Liron 04 December 2009 08:19:32AM 2 points [-]

I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.

That's a group selection argument.

GAME OVER

Comment author: Kaj_Sotala 04 December 2009 09:21:59AM 2 points [-]

Is it necessarily? Consider a population dominated by individuals with an allele for thinking in a uniform fashion. Then insert individuals who will come up with original ideas. A lot of the original ideas are going to be false, but some of them might hit the right spot and confer an advantage. It's a risky, high variance strategy - the bearers of the originality alleles might not end up as the majority, but might not be selected out of the population either.

Comment author: Eliezer_Yudkowsky 04 December 2009 09:26:26AM 4 points [-]

Sure, you can resurrect it as a high-variance high-expected-value individual strategy with polymorphism maintained by frequency-dependent selection... but then there's still no reason to expect original thinking to be less rational thinking. And the original hypothesis was indeed group selection, so byrnema loses the right to talk about evolutionary psychology for one month or something. http://wiki.lesswrong.com/wiki/Group_selection

Comment author: byrnema 04 December 2009 01:10:41PM 5 points [-]

It seems to be extremely popular among a certain sort of amateur evolutionary theorist, though - there's a certain sort of person who, if they don't know about the incredible mathematical difficulty, will find it very satisfying to speculate about adaptations for the good of the group.

That's me. I don't know anything about evolutionary biology -- I'm not even an amateur. Group selection sounded quite reasonable to me, and now I know that it isn't borne out by observation or the math. I can't jump into evolutionary arguments; moratorium accepted.

Comment author: timtyler 09 December 2009 03:17:11PM 2 points [-]

See:

"As a result many are beginning to recognize that group selection, or more appropriately multilevel selection, is potentially an important force in evolution."

Comment author: Alicorn 04 December 2009 01:52:20PM *  0 points [-]

I'm no evo-bio expert, but it seems like you could make it work as something of a kin selection strategy too. If you don't think exactly like your family, then when your family does something collaborative, the odds that one of you has the right idea are higher. Families do often work together on tasks; the more the family that thinks differently succeeds, the better they and their think-about-random-nonconforming-things genes do. Or does assuming that families will often collaborate and postulating mechanisms to make that go well count as a group selection hypothesis?

Comment author: Eliezer_Yudkowsky 04 December 2009 02:57:02PM 9 points [-]

Anecdotally, it seems to me that across tribes and families, people are less likely to try to occupy a niche that already looks filled. (Which of course would be a matter of individual advantage, not tribal advantage!) Some of the people around me may have failed to enter their area of greatest comparative advantage, because even though they were smarter than average, I looked smarter.

Example anecdote: A close childhood friend who wanted to be a lawyer was told by his parents that he might not be smart enough because "he's not Eliezer Yudkowsky". I heard this, hooted, and told my friend to tell his parents that I said he was plenty smart enough. He became a lawyer.

Comment author: andrewbreese 31 January 2011 05:04:01AM 3 points [-]

THAT had a tragic ending!

He became a lawyer.

Comment author: Tyrrell_McAllister 03 December 2009 12:50:14AM 7 points [-]

You might as well ask why all the people wondering if all mathematical objects exist, haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.

If you'd read much work by modern mathematical platonists, you'd know that many of them obsess over such differences, at least in the analytical school. (Not that it's worth your time to read such work. You don't need to do that to infer that they are likely wrong in their conclusions. But not reading it means that you aren't in a position to declare confidently how "all" of them think.)

Comment author: Eliezer_Yudkowsky 03 December 2009 05:35:32AM 1 point [-]

Interesting. I wonder if you've misinterpreted me or if there's actually someone competent out there? Quick example if possible?

Comment author: Tyrrell_McAllister 03 December 2009 11:55:22PM 5 points [-]

Interesting. I wonder if you've misinterpreted me or if there's actually someone competent out there? Quick example if possible?

Heh, false dilemma, I'm afraid :). My only point was that modern platonists aren't making the mistake that you described. They still make plenty of other mistakes.

Mathematical platonists are "incompetent" in the sense that they draw incorrect conclusions (e.g., mathematical platonism). In fact, all philosophers of mathematics whom I've read, even the non-platonists, make the mistake of thinking that physical facts are contingent in some objective sense in which mathematical facts are not. Not that this is believed unanimously. For example, I gather that John Stuart Mill held that mathematical facts are no more necessary than physical ones, but I haven't read him, so I don't know the details of his view.

But all mathematical philosophers whom I know recognize that logical relations are different from causal relations. They realize that Euclid's axioms "make" the angles in ideal triangles sum to 180 degrees in a manner very different from how the laws of physics make a window break when a brick hits it. For example, mathematical platonists might say (mistakenly) that every mathematically possible object exists, but not every physically possible object exists.

Another key difference for the platonist is that causal relations don't hold among mathematical objects, or between mathematical objects and physical objects. They recognize that they have a special burden to explain how we can know about mathematical objects if we can't have any causal interaction with them.

http://plato.stanford.edu/entries/platonism-mathematics/#WhaMatPla http://plato.stanford.edu/entries/abstract-objects/#5

Comment author: Vladimir_Nesov 04 December 2009 12:34:33AM *  2 points [-]

I'd appreciate it if you write down your positions explicitly, even if in one-sentence form, rather than implying that so-and-so position is wrong because [exercise to the reader]. These are difficult questions, so even communicating what you mean is non-trivial, not even talking about convincing arguments and rigorous formulations.

Comment author: Tyrrell_McAllister 04 December 2009 01:22:01AM 6 points [-]

That's fair. I wrote something about my own position here.

Here is what I called mistaken, and why:

  1. Mathematical platonism: I believe that we can't know about something unless we can interact with it causally.

  2. The belief that physical facts are contingent: I believe that this is just an example of the mind projection fallacy. A fact is contingent only with respect to a theory. In particular, the fact is contingent if the theory neither predicts that it must be the case nor that it must not be the case. Things are not contingent in themselves, independently of our theorizing. They just are. To say that something is contingent, like saying that it is surprising, is to say something about our state of knowledge. Hence, to attribute contingency to things in themselves is to commit the mind projection fallacy.

Comment author: byrnema 04 December 2009 02:04:58AM *  2 points [-]

My interest is piqued as well. You appear to be articulating a position that I've encountered on Less Wrong before, and that I would like to understand better.

So physical facts are not contingent. All of them just happen to be independently false or true? What then is the status of a theory?

I'm speculating... perhaps you consider that there is a huge space of possible, logically consistent theories, one for every combination of (independent) facts being true or false. (For example, if there are N statements about the physical universe, 2^N theories.) Of course, relative to one another they are all completely arbitrary. As we learn about the universe, we pick among theories that happen to explain all the facts that we know of (and we prefer theories that do so in ever simpler ways). Then, any new fact may require updating to a new theory, or may be consistent with the current one. So theories are arbitrary but useful. Is this consistent with what you are saying?

Thank you. I apologize if I've misinterpreted -- I suspect the inferential distance between our views is quite great.

Comment author: Tyrrell_McAllister 06 December 2009 04:36:13AM 1 point [-]

Let me start with my slogan-version of my brand of realism: "Things are a certain way. They are not some other way."

I'll admit up front the limits of this slogan. It fails to address at least the following: (1) What are these "things" that are a certain way? (2) What is a "way", of which "things are" one? In particular (3) what is the ontological status of the other ways aside from the "certain way" that "things are"? I don't have fully satisfactory answers to these questions. But the following might make my meaning somewhat more clear.

To your questions:

So physical facts are not contingent. All of them just happen to be independently false or true?

First, let me clear up a possible confusion. I'm using "contingent" in the sense of "not necessarily true or necessarily false". I'm not using it in the sense of "dependent on something else". That said, I take independence, like contingency, to be a theory-relative term. Things just are as they are. In and of themselves, there are no relations of dependence or independence among them.

What then is the status of a theory?

Theories are mechanisms for generating assertions about how things are or would be under various conditions. A theory can be more or less wrong depending on the accuracy of the assertions that it generates.

Theories are not mere lists of assertions (or "facts"). All theories that I know of induce a structure of dependency among their assertions. That structure is a product of the theory, though. (And this relation between the structure and the theory is itself a product of my theory of theories, and so on.)

I should try to clarify what I mean by a "dependency". I mean something like logical dependency. I mean the relation that holds between two statements, P and Q, when we say "The reason that P is true is because Q is true".

Not all notions of "dependency" are theory-dependent in this sense. I believe that "the way things are" can be analyzed into pieces, and these pieces objectively stand in certain relations with one another. To give a prosaic example. The cup in front of me is really there, the table in front of me is really there, and the cup really sits in the relation of "being on" the table. If a cat knocks the cup off the table, an objective relation of causation will exist between the cat's pushing the cup and the cup's falling off the table. All this would be the case without my theorizing. These are facts about the way things are. We need a theory to know them, but they aren't mere features of our theory.

Comment author: Eliezer_Yudkowsky 04 December 2009 07:28:27AM 0 points [-]

Checking these references doesn't show the distinction I was thinking of between the mathematical form of first-order or higher-order logic and model theory, versus causality a la Pearl.

Comment author: Tyrrell_McAllister 04 December 2009 02:57:10PM *  1 point [-]

So, is your complaint just that they use the same formalism to talk about logical relations and causal relations? Or even just that they don't use the same two specific formalisms that you use?

That seems to me like a red herring. Pearl's causal networks can be encoded in ZFC. Conversely, ZFC can be talked about using various kinds of decorated networks --- that's what category theory is. Using the same formalism for the two different kinds of relations should only be a problem if it leads one to ignore the differences between them. As I tried to show above, philosophers of mathematics aren't making this mistake in general. They are keenly aware of differences between logical relations and causal relations. In fact, many would point to differences that don't, in my view, actually exist.

And besides, I don't get the impression that philosophers these days consider nth-order logic to be the formalism for physical explanations. As mentioned on the Wikipedia page for the deductive-nomological model, it doesn't hold the dominant position that it once had.

Comment author: Eliezer_Yudkowsky 04 December 2009 07:05:57PM 1 point [-]

Pearl's causal networks can be encoded in ZFC

That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.

Wei, do you see it now that I've pointed it out? Or does anyone else see it? As problems in philosophy go, it seems like a reasonable practice exercise to see it once I've pointed to it but before I've explained it.

Comment author: Liron 04 December 2009 09:36:03PM 2 points [-]

Is this it:

In logic, any time you have a set of axioms from which it is impossible to derive a contradiction, a model exists about which all the axioms are true. Here, "X exists" means that you can prove, by construction, that an existentially quantified proposition about some model X is true in models of set theory. So all consistent models are defined into "existence".

A causal process is an unfolded computation. Parts of its structure have relationships that are logically constrained, if not fully determined, by other parts. But like any computation, you can put an infinite variety of inputs on the Causality Turing machine's tape, and you'll get a different causal process. Here, "X exists" means that X is a part of the same causal process that you are a part of. So you have to entangle with your surroundings in order to judge what "exists".
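The contrast between the two senses of "exists" can be made concrete with a toy sketch (hypothetical code, not from the thread): on the logical side, a model "exists" as soon as some assignment satisfies the axioms; on the causal side, a process unfolds step by step and supports interventions in Pearl's sense.

```python
from itertools import product

# Logical side: a "model" of some propositional axioms is any
# assignment satisfying all of them; existence is just satisfiability.
def find_model(axioms, variables):
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(ax(assignment) for ax in axioms):
            return assignment  # a model "exists"
    return None

# Causal side: a process is an unfolded computation; an intervention
# (Pearl's do-operator) overrides a variable's mechanism rather than
# merely asserting a value.
def run_causal(mechanisms, order, do=None):
    state = {}
    for var in order:
        if do and var in do:
            state[var] = do[var]          # sever the incoming mechanism
        else:
            state[var] = mechanisms[var](state)
    return state

axioms = [lambda a: a["p"] or a["q"], lambda a: not a["p"]]
print(find_model(axioms, ["p", "q"]))    # {'p': False, 'q': True}

mech = {"rain": lambda s: True, "wet": lambda s: s["rain"]}
print(run_causal(mech, ["rain", "wet"]))
print(run_causal(mech, ["rain", "wet"], do={"wet": False}))
```

The asymmetry shows up in the last two calls: the logical relation is the same whichever way you read it, but intervening on "wet" leaves "rain" untouched, while intervening on "rain" would change "wet".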

Comment author: Wei_Dai 05 December 2009 07:13:40PM 1 point [-]

Eliezer, I still don't understand Pearl well enough to answer your question. Did anyone else get it?

Right now I'm working on the following related question, and would appreciate any ideas. Some very smart people have worked hard on causality for years, but UDT1 seemingly does fine without an explicit notion of causality. Why is that, or is there a flaw in it that I'm not seeing? Eliezer suggested earlier that causality is a way of cashing out the "mathematical intuition module" in UDT1. I'm still trying to see if that really makes sense. It would be surprising if mathematical intuition is so closely related to causality, which seems to be very different at first glance.

Comment author: Vladimir_Nesov 04 December 2009 08:11:06PM 1 point [-]

It's still unclear what you mean. One simple idea is that many formalisms can express one another, but some give more natural ways of representing a given problem than others. In some contexts, a given way of stating things may be clearly superior. If you e.g. see math as something happening in heads of mathematicians, or see implication of classical logic as a certain idealization of material implication where nothing changes, one may argue that a given way is more fundamental, closer to what actually happens.

When you ask questions like "do you see it now?", I doubt there is even a good way of interpreting them as having definite answers, without already knowing what you expect to hear, a lot more context about what kinds of things you are thinking about than is generally available.

Comment author: Tyrrell_McAllister 06 December 2009 03:33:13AM *  0 points [-]

That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.

Wei, do you see it now that I've pointed it out? Or does anyone else see it?

I take this to be your point:

Suppose that you want to understand causation better. Your first problem is that your concept of causation is still vague, so you try to develop a formalism to talk about causation more precisely. However, despite the vagueness, your topic is sufficiently well-specified that it's possible to say false things about it.

In this case, choosing the wrong language (e.g., ZFC) in which to express your formalism can be fatal. This is because a language such as ZFC makes it easy to construct some formalisms but difficult to construct others. It happens to be the case that ZFC makes it much easier to construct wrong formalisms for causation than does, say, the language of networks.

Making matters worse, humans have a tendency to be attracted to impressive-looking formalisms that easily generate unambiguous answers. ZFC-based formalisms can look impressive and generate unambiguous answers. But the answers are likely to be wrong because the formalisms that are natural to construct in ZFC don't capture the way that causation actually works.

Since you started out with a vague understanding of causation, you'll be unable to recognize that your formalism has led you astray. And so you wind up worse than you started, convinced of false beliefs rather than merely ignorant. Since understanding causation is so important, this can be a fatal mistake.

So, that's all well and good, but it isn't relevant to this discussion. Philosophers of mathematics might make a lot of mistakes. And maybe some have made the mistake of trying to use ZFC to talk about physical causation. But few, if any, haven't "noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes." That just isn't among the vast catalogue of their errors.

Comment author: Tyrrell_McAllister 04 December 2009 09:10:14PM *  0 points [-]

That's what I would expect most mathematical-existence types to think. It's true, but it's also the wrong thought.

Perhaps, but irrelevant, because I'm not what you would call a mathematical-existence type.

ETA: The point is that you can't be confident about what thought stands behind the sentence "Pearl's causal networks can be encoded in ZFC" until you have some familiarity with how the speaker thinks. On what basis do you claim that familiarity?

Comment author: Vladimir_Nesov 03 December 2009 09:11:41AM 5 points [-]

haven't noticed the difference in style between the relation between logical axioms and logical models versus causal laws and causal processes.

Amplify?

Comment author: DanArmak 04 December 2009 12:38:13AM 1 point [-]

Upvoted. I don't understand what difference is meant.

Comment author: whpearson 03 December 2009 09:57:38AM 1 point [-]

I'm curious why you find it interesting. To me pure decision theory is an artifact of language. We have the language constructs to describe situations and their outcomes for communicating with other humans, and because of this we try to make formalisms that take the model/utility as inputs.

In a real intelligence I expect decisions to be made on an ad hoc local basis for efficiency reasons. In an evolved creature the expected energy gain from theoretically sound decisions could easily be less than the energetic cost of the extra computation.

Comment author: Wei_Dai 04 December 2009 02:52:41AM 8 points [-]

I'm probably not the best person to explain why decision theory is interesting from an FAI perspective. For that you'd want to ask Eliezer or other SIAI folks. But I think the short answer there is that without a well-defined decision theory for an AI, we can't hope to prove that it has any Friendliness properties.

My own interest in decision theory is mainly philosophical. Originally, I wanted to understand how probabilities should work when there are multiple copies of oneself, either due to mind copying technology, or because all possible universes exist. That led me to ask, "what are probabilities, anyway?" The philosophy of probability is its own subfield in philosophy, but I came to the conclusion that probabilities only have meaning within a decision theory, so the real question I should be asking is what kind of decision theory one should use when there are multiple copies of oneself.

Comment author: Eliezer_Yudkowsky 04 December 2009 03:02:45PM 9 points [-]

Your own answer is also pretty relevant to FAI. Because anything that confuses you can turn out to contain the black box surprise from hell.

Until you know, you don't know if you need to know, you don't know how much you need to know, and you don't know the penalty for not knowing.

Comment author: whpearson 07 December 2009 02:31:13PM 1 point [-]

Thanks.

I'll try to explain a bit more why I am not very interested in probabilities and DTs. I am interested in how decisions are made, but I am far more interested in how an agent gets to have a certain model in the first place (before it is converted into an action). With a finite agent there are questions such as why have model X rather than Y, which I think impinges on the question of what topics we should discuss. I'd say most people don't assign a low probability to DTs being important; they simply don't store a probability for that proposition at all. They have never explored it, so they have no evidence either way.

The model of the world you have can dominate the DT, in determining the action taken. And in the end that is what we care about, the action taken in response to the input and history.

I also think that a DT with a fixed model ignores the possibility of communication between the part that runs through the model and picks an action, and the part that creates the model. For example, if I see a very good contest/offer, I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.

Comment author: Sebastian_Hagen 07 December 2009 03:36:17PM *  2 points [-]

For example, if I see a very good contest/offer, I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.

How is this case different from any other decision? You compute the current probabilities for "this is a fraud" and "this is an unusually good deal". You compute the cost of collecting more data in a specific fashion, and the probability distribution over possible futures containing a future version of you with better knowledge about this problem. You do the same for various alternative actions you could take instead of collecting more data right now, calculate the expected long-run utility of each of the considered possible futures, and choose an action based on that information: either prodding the universe to give you more data about this, or doing something else.

I am glossing over all the interesting hard parts, of course. But still, is there anything fundamentally different about manipulating the expected state of knowledge of your future-self from manipulating any other part of reality?
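As a toy illustration of this value-of-information calculation (all states, probabilities, and utilities below are invented for the example, and the "investigation perfectly reveals the truth" assumption is a deliberate simplification):

```python
# Toy value-of-information calculation: is it worth gathering more data
# before accepting a suspiciously good offer? All numbers are invented.

# Prior beliefs about the offer.
p_fraud = 0.7          # P(offer is a fraud)
p_good = 1 - p_fraud   # P(offer is an unusually good deal)

# Utilities of accepting in each world, and of declining.
u_accept = {"fraud": -100.0, "good": +50.0}
u_decline = 0.0

def best_action_utility(p_fraud_now):
    """Expected utility of the better of accept/decline, given current beliefs."""
    p_good_now = 1 - p_fraud_now
    eu_accept = p_fraud_now * u_accept["fraud"] + p_good_now * u_accept["good"]
    return max(eu_accept, u_decline)

# Option 1: act immediately on the prior.
eu_act_now = best_action_utility(p_fraud)

# Option 2: pay a small cost to investigate first. Assume (for illustration)
# the investigation perfectly reveals which world we are in.
investigation_cost = 1.0
eu_investigate = (p_fraud * best_action_utility(1.0)
                  + p_good * best_action_utility(0.0)
                  - investigation_cost)

print(f"act now:     {eu_act_now:.1f}")
print(f"investigate: {eu_investigate:.1f}")
```

With these made-up numbers, acting immediately is worth 0 (decline), while paying 1 unit of utility to investigate first is worth 14, so "prodding the universe for more data" is simply the available action with the highest expected utility.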

Comment author: whpearson 07 December 2009 11:14:13PM *  1 point [-]

Interesting question. Not quite what I was getting at. I hope you don't mind if I use a situation where extra processing can get you more information.

A normal decision theory can be represented as a simple function from model to action. It should halt.

decisiontheory :: Model -> Action

Let's say you have a model whose consequences you can keep expanding to get a more accurate picture of what is going to happen, like playing chess with a variable amount of look-ahead. What the system is looking for is a program that will recursively self-improve and be Friendly (where taking an action is considered to be building an AI).

It has a function that can either carry on expanding the model or return an action.

modelOrAct :: Model -> Either Action Model

You can implement decisiontheory with this code:

decisiontheory :: Model -> Action

decisiontheory m = either id decisiontheory (modelOrAct m)

However, this has the potential to loop forever due to its recursive definition. This would happen if the expected utility of increasing the model's accuracy always exceeds that of performing an action, and there is no program it can prove safe. You would want some way of interrupting it to update the model with information from the real world as well as from the extrapolation.

So I suppose the difference in this case is that due to making a choice on which mental actions to perform you can get stuck not getting information from the world about real world actions.
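The halting worry can be made concrete with a toy Python rendering of the loop above (the names and numbers are hypothetical; the Haskell signatures are the original sketch). A step budget plays the role of the interruption mechanism:

```python
# Toy version of the modelOrAct loop: expand() either returns an action or
# a refined model. An unbounded loop could deliberate forever, so decide()
# enforces a budget and falls back to a default action.

def make_expander(steps_until_decision, action):
    """Toy modelOrAct: refine the model a fixed number of times, then act."""
    def expand(model):
        if model["depth"] >= steps_until_decision:
            return ("act", action)
        return ("model", {"depth": model["depth"] + 1})
    return expand

def decide(expand, model, budget):
    """Run the expand/act loop, but halt with a safe default if the budget
    runs out -- the interruption mechanism the comment asks for."""
    for _ in range(budget):
        kind, value = expand(model)
        if kind == "act":
            return value
        model = value  # keep refining the model
    return "default-safe-action"

expand = make_expander(steps_until_decision=3, action="accept-offer")
print(decide(expand, {"depth": 0}, budget=10))  # deliberates 3 steps, then acts
print(decide(expand, {"depth": 0}, budget=2))   # budget exhausted: default
```

This is only a sketch of the control flow; a real agent would also need the budget (or an external interrupt) to trigger re-reading evidence from the world, not just a canned default.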

Comment author: wedrifid 07 December 2009 03:28:12PM *  0 points [-]

The model of the world you have can dominate the DT, in determining the action taken. And in the end that is what we care about, the action taken in response to the input and history.

No, the model of the world you have can not dominate the DT or, for that matter, do anything at all. There must be a decision theory, either explicit or implicit, in some action-generating algorithm that you are running. Then it is just a matter of how much effort you wish to spend developing each.

I also think that a DT with a fixed model ignores the possibility of communication between the part that runs through the model and picks an action, and the part that creates the model. For example, if I see a very good contest/offer, I might think it too good to be true, and look for more information to alter my model and find the catch before taking the offer up.

A Decision Theory doesn't make you naive or impractical. Deciding to look for more information is just a good decision.

Comment author: whpearson 08 December 2009 08:44:37AM *  0 points [-]

No, the model of the world you have can not dominate the DT or, for that matter, do anything at all. There must be a decision theory, either explicit or implicit, in some action-generating algorithm that you are running. Then it is just a matter of how much effort you wish to spend developing each.

I spoke imprecisely. I meant that the part of the program that generates the model of the world dominates the DT in terms of what action is taken. That is, with a fixed DT you can make it perform any action depending on what model you give it. The converse is not true, as the model constrains the possible actions.

A Decision Theory doesn't make you naive or impractical. Deciding to look for more information is just a good decision.

I think in terms of code and types. Most discussions of DTs don't discuss feeding the utilities back to the model-making section, so I'm assuming a simple type. It might be wrong, but at least I can be precise about what I am talking about. See my reply to Sebastian.

Comment author: Yorick_Newsome 02 December 2009 11:25:56AM 1 point [-]

Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until introduced, we just didn't know it was so interesting! That's my take anyways.

Comment author: alyssavance 01 December 2009 01:53:57AM 7 points [-]

"Getting good popular writing and videos on the web, of sorts that improve AI risks understanding for key groups;"

Though good popular writing is, of course, very important, I think we sometimes overestimate the value of producing summaries/rehashings of earlier writing by Vinge, Kurzweil, Eliezer, Michael Vassar and Anissimov, etc.

Comment author: outlawpoet 01 December 2009 01:58:03AM *  1 point [-]

I must agree with this, although video and most writing OTHER than short essays and polemics would be mostly novel, and interesting.

Comment author: MichaelAnissimov 02 December 2009 02:42:18AM 6 points [-]

I participated in the 2008 summer intern program and visited the 2009 program several times and thought it was a lot of fun and very educational. The ideas that I bounced off of people at these programs still inform my writing and thinking now.

Comment author: nhamann 03 December 2009 06:18:26AM 5 points [-]

I have a (probably stupid) question. I have been following Less Wrong for a little over a month, and I've learned a great deal about rationality in the meantime. My main interest, however, is not rationality, it is in creating FAI. I see that the SIAI has an outline of a research program, described here: http://www.singinst.org/research/researchareas.

Is there an online community that is dedicated solely to discussing friendly AI research topics? If not, is the creation of one being planned? If not, why not? I realize that the purpose of these SIAI fellowships is to foster such research, but I'd imagine that a discussion community focused on relevant topics in evolutionary psych, cogsci, math, CS, etc. would provide a great deal more stimulation for FAI research than would the likely limited number of fellowships available.

A second benefit would be that it would provide a support group to people (like me) who want to do FAI research but who do not know enough about cogsci, math, CS, etc. to be of much use to SIAI at the moment. I have started combing through SIAI's reading list, which has been invaluable in narrowing down what I need to be reading, but at the end of the day, it's only a reading list. What would be ideal is an active community full of bright and similarly-motivated people who could help to clarify misconceptions and point out novel connections in the material.

I apologize if this comment is off-topic.

Comment author: Kaj_Sotala 03 December 2009 08:57:05AM 4 points [-]

Is there an online community that is dedicated solely to discussing friendly AI research topics?

None that would be active and of high quality. SL4 is probably the closest, but these days it's kinda quiet and the discussions aren't very good. Part of the problem seems to be that a community dedicated purely to FAI draws too many cranks. And even with active moderation, it's often pretty hard for people in general to come up with good, original questions and topics on FAI; SL4 is dead partly because the people there are tired of rehashing the same topics over and over. It seems like having FAI discussion on the side of an established rationalist community is a good idea, both to drive out the cranks and because other kinds of discussion that aren't directly relevant to FAI might still contribute to an understanding of the topic indirectly.

Comment author: Vladimir_Nesov 03 December 2009 10:04:59AM 2 points [-]

This forum is as close as there is to a FAI discussion group. SL4 is very much (brain-)dead at the moment. There aren't even a lot of people who are known to be specifically attacking the FAI problem -- one can name Yudkowsky, Herreshoff, maybe Rayhawk, others keep quiet. Drop me a mail, I may have some suggestions on what to study.

Comment author: righteousreason 04 December 2009 02:01:19PM 1 point [-]

Whatever happened to Nick Hay, wasn't he doing some kind of FAI related research?

Comment author: CarlShulman 04 December 2009 02:49:47PM 2 points [-]

He is at Berkeley working under Stuart Russell (of AI: A Modern Approach, among other things).

Comment author: whpearson 01 December 2009 11:01:00PM 5 points [-]

I really like what SIAI is trying to do, the spirit that it embodies.

However, I am getting more skeptical of any projections or projects based on anything other than good old-fashioned scientific knowledge (my own included).

You can progress scientifically to make AI if you copy human architecture somewhat, by making predictions about how the brain works and organises itself. However, I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world would convince SIAI to abandon the search for a fixed decision theory as a module of the AI? And why isn't SIAI looking for that evidence, to make sure you aren't wasting your time?

For every Einstein who makes the "right" cognitive leap there are probably many orders of magnitude more Kelvins who do things like predict that meteors provide fuel for the sun.

How are you going to winnow out the wrong ideas if they are consistent with everything we know, especially if they are pure mathematical constructs?

Comment author: AngryParsley 04 December 2009 09:20:06AM *  4 points [-]

You can progress scientifically to make AI if you copy human architecture somewhat.

I think you're making the mistake of relying too heavily on our one sample of a general intelligence: the human brain. How do we know which parts to copy and which parts to discard? To draw an analogy to flight, how can we tell which parts of the brain are equivalent to a bird's beak and which parts are equivalent to wings? We need to understand intelligence before we can successfully implement it. Research on the human brain is expensive, requires going through a lot of red tape, and is already being done by other groups. More importantly, planes do not fly because they are similar to birds. Planes fly because we figured out a theory of aerodynamics. Planes would fly just as well if no birds had ever existed, and explaining aerodynamics doesn't require any talk of birds.

I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path?

I don't see how we can hope to make significant progress on non-bird flight. How will we test whether our theories are correct or on the right path?

Just because you can't think of a way to solve a problem doesn't mean that a solution is intractable. We don't yet have the equivalent of a theory of aerodynamics for intelligence, but we do know that it is a computational process. Any algorithm, including whatever makes up intelligence, can be expressed mathematically.

As to the rest of your comment, I can't really respond to the questions about SIAI's behavior, since I don't know much about what they're up to.

Comment author: Jordan 04 December 2009 10:10:34AM 1 point [-]

The bird analogy rubs me the wrong way more and more. I really don't think it's a fair comparison. Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI. Certainly intelligence might have some nice underlying theory, so we should pursue that angle as well, but I don't see how we can be certain either way.

Comment author: AngryParsley 04 December 2009 06:55:08PM *  5 points [-]

Flight is based on some pretty simple principles, intelligence not necessarily so.

I think the analogy still maps even if this is true. We can't build useful AIs until we really understand intelligence. This holds no matter how complicated intelligence ends up being.

If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI.

First, nothing is "fundamentally complex." (See the reductionism sequence.) Second, brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Comment author: Jordan 05 December 2009 02:19:44AM 1 point [-]

We can't build useful AIs until we really understand intelligence.

You're overreaching. Uploads could clearly be useful, whether we understand how they are working or not.

brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

Comment author: Vladimir_Nesov 05 December 2009 02:23:06AM *  3 points [-]

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

But you still can't get to FAI unless you (or the uploads) understand intelligence.

Comment author: Jordan 05 December 2009 10:01:52AM *  2 points [-]

Right, the two things you must weigh and 'choose' between (in the sense of research, advocacy, etc):

1) Go for FAI, with the chance that AGI comes first

2) Go for uploads, with the chance they go crazy when self modifying

You don't get provable Friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to superintelligence that doesn't result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it's not clear in my mind which option is better.

Comment author: CarlShulman 05 December 2009 10:26:04AM *  8 points [-]

At the workshop after the Singularity Summit, almost everyone (including Eliezer, Robin, and myself), including all the SIAI people, said they hoped that uploads would be developed before AGI. The only folk who took the other position were those actively working on AGI (but not FAI) themselves.

Also, people at SIAI and FHI are working on papers on strategies for safer upload deployment.

Comment author: Jordan 06 December 2009 06:54:45AM 2 points [-]

Interesting, thanks for sharing that. I take it then that it was generally agreed that the time frame for FAI was probably substantially shorter than for uploads?

Comment author: CarlShulman 06 December 2009 10:43:08AM 1 point [-]

Separate (as well as overlapping) inputs go into de novo AI and brain emulation, giving two distinct probability distributions. AI development seems more uncertain, so that we should assign substantial probability to it coming before or after brain emulation. If AI comes first/turns out to be easier, then FAI-type safety measures will be extremely important, with less time to prepare, giving research into AI risks very high value.

If brain emulations come first, then shaping the upload transition to improve the odds of solving collective action problems like regulating risky AI development looks relatively promising. Incidentally, however, a lot of useful and as yet unpublished analysis (e.g. implications of digital intelligences that can be copied and run at high speed) is applicable to thinking about both emulation and de novo AI.

Comment author: timtyler 09 December 2009 11:46:27PM *  0 points [-]

re: "almost everyone [...] said they hoped that uploads would be developed before AGI"

IMO, that explains much of the interest in uploads: wishful thinking.

Comment author: gwern 10 December 2009 12:20:53AM 5 points [-]

Reminds me of Kevin Kelly's The Maes-Garreau Point:

"Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?"

Possibly the single most disturbing bias-related essay I've read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.

Comment author: Vladimir_Nesov 05 December 2009 01:16:30PM 2 points [-]

I tentatively agree, there well may be a way to FAI that doesn't involve normal humans understanding intelligence, but rather improved humans understanding intelligence, for example carefully modified uploads or genetically engineered/selected smarter humans.

Comment author: wedrifid 05 December 2009 03:06:35AM 2 points [-]

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

I rather suspect uploads would arrive at AGI before their more limited human counterparts. Although I suppose uploading only the right people could theoretically increase the chances of FAI coming first.

Comment author: whpearson 04 December 2009 11:34:38AM 0 points [-]

Okay, let us say you want to make a test for intelligence, just as there was a test for the lift generated by a fixed wing.

As you are testing a computational system there are two things you can look at, the input-output relation and the dynamics of the internal system.

Looking purely at the IO relation is not informative; it can be fooled by GLUTs or compressed versions of the same. This is why the Loebner Prize has not led to real AI in general. And making a system that can solve a single problem we consider to require intelligence (such as chess) just gets you a system that can solve chess, and does not generalize.
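As a toy illustration of the GLUT (Giant Look-Up Table) point, in hypothetical Python: a pure input-output test cannot distinguish memorization from understanding.

```python
# A toy "GLUT": it passes any fixed IO test it was built from,
# while containing no general problem-solving ability at all.

qa_test = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
}

glut = dict(qa_test)  # "agent" that simply memorizes the test verbatim

def passes_io_test(agent, test):
    """Score the agent purely on its input-output behaviour."""
    return all(agent.get(q) == a for q, a in test.items())

print(passes_io_test(glut, qa_test))   # perfect score on the fixed test
print(glut.get("What is 3 + 3?"))      # nothing generalizes: no answer
```

The lookup table gets a perfect score on the fixed test yet fails any question outside it, which is why an IO-only criterion says little about the internal dynamics.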

Contrast this with the wind tunnels that the Wright brothers had: they could test for lift, which they knew would keep them up.

If you want to get into the dynamics of the internals of the system, they are divorced from our folk idea of intelligence, which is problem solving (unlike the folk theory of flight, which connects nicely with lift from a wing). So what sort of dynamics should we look for?

If the theory of intelligence is correct, the dynamics will have to be found in the human brain. Despite the slowness and difficulty of analysing it, we are generating more data which we should be able to use to narrow down the dynamics.

How would you go about creating a testable theory of intelligence? Preferably without having to build a many person-year project each time you want to test your theory.

Comment author: [deleted] 03 December 2009 02:56:53AM 2 points [-]

If a wrong idea is both simple and consistent with everything you know, it cannot be winnowed out. You have to either find something simpler or find an inconsistency.

Comment author: LauraABJ 01 December 2009 05:07:47PM 5 points [-]

When you say 'rotating,' what time frame do you have in mind? A month? A year? Are there set sessions, like the summer program, or are they basically whenever someone wants to show up?

Comment author: AnnaSalamon 01 December 2009 08:26:06PM 5 points [-]

Initial stints can be anywhere from three weeks to three months, depending on the individual's availability, on the projects planned, and on space and other constraints on this end. There are no set sessions through most of the year, but we may try to have a coordinated start time for a number of individuals this summer.

Still, non-summer is better for any applicants whose schedules are flexible; we'll have fewer new folks here, and so individual visiting fellows will get more attention from experienced folks.

Comment author: alyssavance 01 December 2009 01:48:26AM 4 points [-]

"Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours."

I was there during the summer of 2008 and 2009, and I wholeheartedly agree with this.

Comment author: alyssavance 01 December 2009 01:59:01AM 3 points [-]

"Improving the LW wiki, and/or writing good LW posts;"

Does anyone have data on how many people actually use the LW wiki? If few people use it, then we should find out why and solve it; if it cannot be solved, we should avoid wasting further time on it. If many people use it, of course, we should ask for their comments on what could be improved.

Comment author: mormon2 01 December 2009 06:06:38PM 8 points [-]

Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of an FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.

Also if FAI is the primary goal here then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say that but there it is)?

Comment author: Tyrrell_McAllister 01 December 2009 06:24:06PM 7 points [-]

I think that you answered your own question. One way to develop FAI is to attract talented people such as those at Google, etc. One way to draw such people is to convince them that FAI is worth their time. One way to convince them that FAI is worth their time is to lay out strong arguments for the risks and benefits of FAI.

Comment author: Eliezer_Yudkowsky 02 December 2009 04:34:54AM 4 points [-]

For a task that is estimated to be so dangerous and so world changing would it not behoove SIAI to be the first to make FAI?

That's my end of the problem.

Also if FAI is the primary goal here then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA

Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.

Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.

Comment author: alexflint 02 December 2009 12:33:07PM 5 points [-]

Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.

I'm not sure the olympiads are such a uniquely optimal selector. For sure there were lots of superstars at the IOI, but now that I'm doing a PhD I realise that many of those small-scale problem-solving skills don't necessarily transfer to broader-scale AI research (putting together a body of work, seeing analogies between different theories, predicting which research direction will be most fruitful). Equally, I met a ton of superstars working at Google, and I mean deeply brilliant superstars, not just well-trained professional coders. Google is trying to attract much the same crowd as SIAI, but they have a ton more resources, so insofar as it's possible it makes sense to try to recruit people from Google.

Comment author: AnnaSalamon 02 December 2009 07:27:05PM 4 points [-]

It would be nice if we could get both groups (international olympiads and Google) reading relevant articles, and thinking about rationality and existential risk. Any thoughts here, alexflint or others?

Comment author: alexflint 02 December 2009 09:37:24PM 6 points [-]

Well, for the olympiads, each country runs a training camp leading up to the actual olympiad, and they'd probably be more than happy to have someone from SIAI give a guest lecture. These kids would easily pick up the whole problem from a half-hour talk.

Google also has guest speakers and someone from SIAI could certainly go along and give a talk. It's a much more difficult nut to crack as Google has a somewhat insular culture and they're constantly dealing with overblown hype so many may tune out as soon as something that sounds too "futuristic" comes up.

What do you think?

Comment author: AnnaSalamon 02 December 2009 09:44:09PM *  3 points [-]

Yes, those seem worth doing.

Re: the national olympiad training camps, my guess is that it is easier to talk if an alumnus of the program recommends us. We know alumni of the US math olympiad camp, and the US computing olympiad camp, but to my knowledge we don't know alumni from any of the other countries or from other subjects. Do you have connections there, Alex? Anyone else?

Comment author: Kevin 07 March 2010 09:08:09AM *  2 points [-]

What about reaching out to people who scored very highly when taking the SATs as 7th graders? Duke sells the names and info of the test-takers to those that can provide "a unique educational opportunity."

http://www.tip.duke.edu/talent_searches/faqs/grade_7.html#release

Comment author: alexflint 03 December 2009 08:51:02AM 1 point [-]

Sure, but only in Australia I'm afraid :). If there's anyone from SIAI in that part of the world then I'm happy to put them in contact.

Comment author: Jack 02 December 2009 01:08:06PM 2 points [-]

Thinking about this point is leading me to conclude that Google is substantially more likely than SIAI to develop a General AI before anyone else. Gintelligence anyone?

Comment author: alexflint 02 December 2009 05:10:43PM 1 point [-]

Well, I don't think Google is working on GAI explicitly (though I wouldn't know), and I think they're not working on it for much the same reason that most research labs aren't working on it: it's difficult, risky research, outside the mainstream dogma, and most people don't put very much thought into the implications.

Comment author: Jack 02 December 2009 07:04:49PM *  4 points [-]

I think the conjunction of the probabilities that (1) Google decides to start working on it, AND (2) Google can put together a team that could develop an AGI, AND (3) that team succeeds, might be higher than the probability of (2) and (3) for SIAI/Eliezer.

(1) is pretty high because Google gets its pick of the most talented young programmers and gives them a remarkable amount of freedom to pursue their own interests. Especially if interest in AI increases, it wouldn't be surprising if a lot of people with an interest in AGI ended up working there. I bet a fair number already do.

(2) and (3) are high because of Google's resources, their brand/reputation, and the fact that they've shown they are capable of completing and deploying innovative code and business ideas.

All of the above is said with very low confidence.

Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.

Edit: I knew this would get downvoted :-)... or not.

Comment author: wedrifid 03 December 2009 03:05:51AM 1 point [-]

Edit: I knew this would get downvoted :-)

I voted up. I think you may be mistaken but you are looking at relevant calculations.

Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.

Nice.

Comment author: alexflint 02 December 2009 09:00:02PM 0 points [-]

Fair point. I actually rate (1) quite low, just because there are so few people who think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI in comparison to the manpower dedicated to other AI fields. Amongst Googlers there is presumably also only a very small fraction of folks potentially willing to tackle AGI head-on.

Comment author: Vladimir_Nesov 02 December 2009 11:27:06AM 2 points [-]

Only if you can expect to manage to get a supply of these folks. On the absolute scale, assuming that level of ability X is absolutely necessary to make meaningful progress (where X is relative to current human population) seems as arbitrary as assuming that human intelligence is exactly the greatest possible level of intelligence theoretically possible. FAI still has a lot of low-hanging fruit, simply because the problem was never seriously considered in this framing.

Comment author: mormon2 02 December 2009 07:07:52AM 5 points [-]

"That's my end of the problem."

Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?

"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."

So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?

"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."

If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design and some portions implemented, while you do not have any portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on those projects?

Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code. You are much better off with C++ or Ct or some other such language without all the overheads, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.

Comment author: Eliezer_Yudkowsky 02 December 2009 07:42:30AM *  10 points [-]

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.

The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.

Comment author: mormon2 03 December 2009 02:25:14AM 2 points [-]

OK, opinions on the relative merits of the AGI projects aside, you did not answer my first question, the one whose answer I am actually most interested in: where is the technical work? I was looking for some detail about which part of Step One you are working on. If TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?

"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."

OK, that being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena. Of course, these people have nothing to replace GR with, so arguing that GR is not completely right is rather pointless until you have a replacement, GR not being totally wrong. How is your dismissal of the rest of AGI any better than that?

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say that all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers? Without your formal work out for public review, is it really fair to claim that all the current AGI projects are essentially wrong-headed?

"So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?"

I take it from the fact that you didn't answer the question that you have in fact not worked for Intel, DARPA, etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Comment author: Nick_Tarleton 03 December 2009 03:13:17AM 4 points [-]

Without your formal work out for public review, is it really fair to claim that all the current AGI projects are essentially wrong-headed?

Truth-seeking is not about fairness.

Comment author: wedrifid 03 December 2009 02:47:10AM 3 points [-]

Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Really, we get it. We don't have automated signatures on this system, but we can all pretend that this is included in yours. All it serves to do is create a jarring discord between the quality of your claims and your presumption of status.

Comment author: Vladimir_Nesov 03 December 2009 09:26:13AM 1 point [-]

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say that all these other AGI projects won't work.

The hypothesis is that yes, they won't work as steps towards FAI. Worse, they might actively backfire. And FAI progress is not as "impressive". What do you expect should be done, given this conclusion? Keep running toward the abyss, just to preserve the appearance of productivity?

Comment author: wedrifid 03 December 2009 02:48:04AM *  1 point [-]

OK, that being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena. Of course, these people have nothing to replace GR with, so arguing that GR is not completely right is rather pointless until you have a replacement, GR not being totally wrong. How is your dismissal of the rest of AGI any better than that?

For this analogy to hold there would need to be an existing complete theory of AGI.

(There would also need to be something in the theory or proposed application analogous to "hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!")

OK, opinions on the relative merits of the AGI projects aside, you did not answer my first question, the one whose answer I am actually most interested in: where is the technical work? I was looking for some detail about which part of Step One you are working on. If TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?

These are good questions. Particularly the TDT one. Even if the answer happened to be "not that important".

Comment author: Eliezer_Yudkowsky 03 December 2009 05:34:22AM 3 points [-]

I was working on something related to TDT this summer, can't be more specific than that. If I get any of the remaining problems in TDT nailed down beyond what was already presented, and it's not classified, I'll let y'all know. Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.

LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess.

Comment author: anonym 03 December 2009 04:20:40PM 11 points [-]

Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.

I'm curious to know your reasoning behind this, if you can share it.

It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.

Comment author: wedrifid 03 December 2009 06:00:24AM 0 points [-]

Thanks for the update. Hopefully one of the kids you invite to visit has a knack for translating into impressive and you can delegate.

Comment author: [deleted] 05 December 2009 07:46:41AM 1 point [-]

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

No? I've been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.

And it happens that I have a copy of "Do the Right Thing: Studies in Limited Rationality", but I'm not reading it, even though I feel like it will solve my entire problem perfectly. I wonder why this is.

Comment author: [deleted] 03 December 2009 03:20:15AM 3 points [-]

. . . Java is slower than C++ because of all the overheads of running the code. . . .

A fast programming language is the last thing we need. Literally--when you're trying to create a Friendly AI, compiling it and optimizing it and stuff is probably the very last step.

(Yes, I did try to phrase the latter half of that in such a way to make the former half seem true, for the sake of rhetoric.)

Comment author: Vladimir_Nesov 02 December 2009 11:32:17AM 3 points [-]

If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he have a design, with some portions implemented, while you have nothing implemented?

He is solving a wrong problem (i.e. he is working towards destroying the world), but that's completely tangential.

Comment author: timtyler 09 December 2009 02:59:33PM 1 point [-]

It seems like FUD. I doubt Ben Goertzel is working towards destroying the world. It seems much more likely that the whole idea is a paranoid hallucination.

Comment author: DanArmak 03 December 2009 11:41:47AM 1 point [-]

Java is slower than C++ because of all the overheads of running the code.

Those damnable overheads. Assembly language FTW!

</sarcasm>

Comment deleted 02 December 2009 08:07:47AM [-]
Comment author: Eliezer_Yudkowsky 02 December 2009 09:13:23AM 2 points [-]

Oh, hell yeah. Anna's side can recruit them, no problem. And I'm certainly not saying that no one who works at these organizations could make the cut for the Final Programmers. Just that you can't hire Final Programmers at random from anywhere, not even Google.

Comment author: zero_call 03 December 2009 11:42:02PM 2 points [-]

Does SIAI have subscription access to scientific journals?

Comment author: AnnaSalamon 03 December 2009 11:53:28PM 2 points [-]

Yes

Comment author: zero_call 04 December 2009 12:03:16AM *  2 points [-]

Request for elaboration... Is this at the scale of a university library, or is there access only to a few select journals? This stuff is expensive... I would be somewhat impressed if SIAI had full access, comparable to a research university. I would also be curious what part of your budget is dedicated just to this information access. (Although I would understand if this information is private.)

Comment author: AnnaSalamon 04 December 2009 03:48:03AM *  2 points [-]

In practice, enough of us retain online library access through our former universities that we can reach the articles we need reasonably easily. Almost everything is online.

If this ceases to be the case, we'll probably buy library privileges through Stanford, San Jose State, or another nearby university.

Comment author: Tyrrell_McAllister 04 December 2009 04:02:06AM *  2 points [-]

Do you mean that only those individuals who have UC logins have access to the online journals (JSTOR, etc.)? That would mean that you retain those privileges for only as long as the UC maintains your account. In my experience, that isn't forever.

ETA: I have to correct myself, here. They terminated my e-mail account, but I just discovered that I can still log into some UC servers and access journals through them.

Comment author: Morendil 01 December 2009 11:50:21AM 3 points [-]

Who's Peter Platzer ?

Comment deleted 01 December 2009 08:40:19PM [-]
Comment author: AnnaSalamon 01 December 2009 08:56:18PM 4 points [-]

As well as a trial conference grants project to make it easier for folks to present AI risks work at conferences (the grants cover conference fees).

He's also offering useful job coaching services for folks interested in reducing existential risk by earning and donating. We increasingly have a useful, though informal, career network for folks interested in earning and donating; anyone who is interested in this should let me know. I believe Frank Adamek will be formalizing it shortly.

Comment author: Aleksei_Riikonen 02 December 2009 01:28:19AM 3 points [-]

"We increasingly have a useful, though informal, career network for folks interested in earning and donating"

I wonder if people will soon start applying to this SIAI program just as a means to get into the Silicon Valley job market?

Not that it would be a negative thing (great, instead!), if they are useful while in the program anyway, and perhaps even value the network they receive enough to donate to SIAI simply for maintaining it...

Comment author: Jordan 02 December 2009 04:37:51AM 6 points [-]

I wonder if people will soon start applying to this SIAI program just as a means to get into the Silicon Valley job market?

This brings up an interesting point in my mind. There are so many smart people surrounding the discussion of existential risk that there must be a better way for them to cohesively raise money than just asking for donations. Starting an 'inner circle' to help people land high paying jobs is a start, but maybe it could be taken to the next level. What if we actively funded startups, ala Y Combinator, but in a more selective fashion, really picking out the brightest stars?

Comment author: CarlShulman 02 December 2009 07:32:13AM 4 points [-]

You have to work out internal rates of return for both sorts of project, taking into account available data, overconfidence and other biases, etc. If you spend $50,000 on VC investments, what annual return do you expect? 30% return on investment, up there with the greatest VCs around? Then consider research or other projects (like the Singularity Summit) that could mobilize additional brainpower and financial resources to work on the problem. How plausible is it that you can get a return of more than 50% there?

There is a reasonably efficient capital market, but there isn't an efficient charitable market. However, on the entrepreneurship front, check out Rolf Nelson.
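To make the comparison concrete, here is a sketch of how those two rates of return diverge when compounded (the ten-year horizon is an assumption for illustration):

```java
// Hypothetical illustration: $50,000 compounded annually for ten years
// at the 30% and 50% rates of return discussed above.
public class Compounding {
    static double futureValue(double principal, double rate, int years) {
        return principal * Math.pow(1.0 + rate, years);
    }

    public static void main(String[] args) {
        double principal = 50_000;
        for (double rate : new double[]{0.30, 0.50}) {
            System.out.printf("%.0f%% for 10 years: $%,.0f%n",
                    rate * 100, futureValue(principal, rate, 10));
        }
        // Roughly $689,000 at 30% versus about $2.9 million at 50%.
    }
}
```

The point is the gap between the two outcomes, not either absolute number: a modest difference in internal rate of return compounds into a large difference in what the money eventually buys.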

Comment author: Jordan 02 December 2009 01:36:50AM 5 points [-]

... making the Singularity seem even more like a cult. We'll help you get a good job! Just make sure to tithe 10% of your income.

I'm totally OK with this though.

Comment author: AnnaSalamon 02 December 2009 02:45:00AM 5 points [-]

Does it help that the "tithe 10% of your income" goes to an effect in the world (existential risk reduction) rather than to a specific organization (SIAI)? FHI contributions, or the effective building of new projects, are fully allied causes.

Comment author: Jordan 02 December 2009 04:33:23AM 3 points [-]

I'm OK with donating to SIAI in particular, even if the single existential risk my funding went towards is preventing runaway AIs. What makes the biggest difference for me is having met some of the people, having read some of their writing, and in general believing that they are substantially more dedicated to solving a problem than to merely preserving an organization set up to solve that problem.

Comment author: alyssavance 02 December 2009 05:25:10AM 2 points [-]

The Catholic Church asks that you tithe 10% of your income, and it's not even a quid pro quo.

Comment author: Aleksei_Riikonen 02 December 2009 01:57:32AM 1 point [-]

Yes, there's that.

In these comparisons it's good to remember, though, that all the most respected universities also value their alumni donating to the university.

Comment author: FeministX 02 December 2009 04:49:32AM 1 point [-]

Hmm. Maybe I should apply...

Comment author: AnnaSalamon 02 December 2009 07:37:27PM *  6 points [-]

You should apply. I liked my 90 second skim of your blog just now, and also, everyone who thinks they should maybe apply, should apply.

Comment author: Daniel_Burfoot 02 December 2009 03:41:27PM 0 points [-]

What kind of leeway are the fellows given in pursuing their own projects? I have an AI project I am planning to work on after I finish my Phd; it would be fun to do it at SIAI, as opposed to my father's basement.

Comment author: AnnaSalamon 02 December 2009 07:41:11PM 5 points [-]

Less leeway than that. We want fellows, and others, to do whatever most effectively reduces existential risk... and this is unlikely to be a pre-existing AI project that someone is attached to.

That said, we do try to work with individual talents and enthusiasms, and we use group brainstorming processes to create many of the projects we work on.

Comment author: ciphergoth 02 December 2009 09:02:11AM 0 points [-]

So just to make sure I understand correctly: successful applicants will spend a month with the SIAI in the Bay Area. Board and airfare are paid but no salary can be offered.

I may not be the sort of person you're looking for, but taking a month off work with no salary would be difficult for me to manage. No criticism of the SIAI intended, who are trying to achieve the best outcomes with limited funds.

Comment author: AnnaSalamon 02 December 2009 09:09:54AM 4 points [-]

That's right. Successful applicants will spend three weeks to three months working with us here, with living and transit expenses paid but with no salary. Some who prove useful will be invited to stay long-term, at which point stipends can be managed; visiting fellow stints are for exploring possibilities, building relationships, and getting some risk reducing projects done.

If it makes you feel any better, think of it as getting the most existential risk reduction we can get for humanity, and as much bang as possible per donor buck.

Apart from questions of salary, are you the sort we're looking for, ciphergoth?

Comment author: ciphergoth 03 December 2009 09:09:28AM 0 points [-]

Possibly - I've published papers, organised events, my grasp of mathematics and philosophy is pretty good for an amateur, and I work as a programmer. But unfortunately I have a mortgage to pay :-( Again, no criticism of SIAI, who as you say must get the most bang per buck.

Comment author: Matt_Simpson 02 December 2009 06:00:01AM 0 points [-]

How long will this opportunity be available? I'm very interested, but I probably won't have a large enough block of free time for a year and a half.

Comment author: AnnaSalamon 02 December 2009 07:33:11AM 1 point [-]

We will probably be doing this at that time as well.

Still, we're getting high rates of return lately on money and on time, which suggests that if existential risk reduction is your aim, sooner is better than later. What are your aims and your current plans?

Comment author: Matt_Simpson 02 December 2009 04:27:53PM 0 points [-]

At the moment I'm beginning my Ph.D. in statistics at Iowa State, so the school year is pretty much filled with classes - at least until I reach the dissertation stage. That leaves summers. I'm not completely sure what I'll be doing this summer, but I'm about 90% sure I'll be taking a summer class to brush up on some math so I'm ready for measure theory in the fall. If the timing of that class is bad, I may not have more than a contiguous week free over the summer. Next summer I expect to have more disposable time.

Comment author: Johnicholas 01 December 2009 08:31:18PM 0 points [-]

Logistics question: Is the cost to SIAI approximately 1k per month? (aside from the limited number of slots, which is harder to quantify)

Comment author: AnnaSalamon 01 December 2009 08:56:40PM *  2 points [-]

Yep. Living expenses are moderately lower than $1k per visiting fellow per month, but given international airfare and such, $1k per person per month is a good estimate.

Why do you ask?