
Comment author: Zetetic 27 September 2011 01:48:29AM 5 points [-]

I would really, really like to know: What areas of pure mathematics stand out to you now?

Comment author: aletheilia 28 September 2011 10:02:49AM 0 points [-]

He may have changed his mind since then, but in case you missed it: Recommended Reading for Friendly AI Research

Comment author: Vladimir_Nesov 15 September 2011 03:30:05PM 1 point [-]

What kinds of expensive computations would you need running in order to make progress on WBE? (As a separate issue, why should you want to make progress on WBE?)

Comment author: aletheilia 19 September 2011 10:19:00AM 1 point [-]

This idea probably just comes from looking at the Blue Brain project, which seems to be aiming in the direction of WBE and uses an expensive supercomputer to simulate models of neocortical columns... right, Luke? :)

(I guess because we'd like to see WBE come before AI: creating FAI looks like it'd be a hell of a lot more difficult than ensuring that a few (hundred) WBEs behave in an at least humanly friendly way, which would make them of some use in making progress on FAI itself.)
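To put a number on "expensive": here's a crude back-of-envelope for what a neuron-level, real-time simulation might demand. This is my own sketch, not Blue Brain's figures, and every quantity in it is an order-of-magnitude assumption.

    # Crude back-of-envelope for neuron-level, real-time brain simulation.
    # Every number below is an order-of-magnitude assumption, not a measurement.

    neurons = 1e11             # ~10^11 neurons in a human brain
    synapses_per_neuron = 1e4  # ~10^4 synapses per neuron
    mean_rate_hz = 10          # rough average firing rate, in spikes/second
    flops_per_event = 10       # FLOPs to update one synapse per spike (model-dependent)

    events_per_second = neurons * synapses_per_neuron * mean_rate_hz
    required_flops = events_per_second * flops_per_event
    print(f"~{required_flops:.0e} FLOPS")  # prints ~1e+17 FLOPS

That lands in supercomputer territory even for a simple spiking model; detailed compartmental models of the kind Blue Brain runs add several more orders of magnitude, which is presumably why even a single cortical column needs a dedicated machine.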

Comment author: [deleted] 09 September 2011 04:35:22PM *  2 points [-]

I really wish there were a post that was more advanced than this one but more intuitive than Shane Legg's explanation. This post doesn't really convey a full technical understanding of how to actually apply Solomonoff Induction or MML, and Legg's version just shows a succinct derivation.

In response to comment by [deleted] on [SEQ RERUN] Occam's Razor
Comment author: aletheilia 14 September 2011 08:52:36AM 0 points [-]

Perhaps the following review article can be of some help here: A Philosophical Treatise of Universal Induction
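And for a bare-bones feel of the mechanics, a toy sketch of my own (not taken from the article): true Solomonoff induction mixes predictions over all programs for a universal Turing machine, weighted by 2^-(program length), and is incomputable. The sketch below only mimics the shape of that calculation, over a tiny hand-picked hypothesis class with made-up description lengths.

    # Toy Solomonoff-style induction over a hand-picked hypothesis class.
    # Each hypothesis: (name, description length in bits, next-bit predictor).
    HYPOTHESES = [
        ("all zeros",   2, lambda bits: 0),
        ("all ones",    2, lambda bits: 1),
        ("alternating", 3, lambda bits: 1 - bits[-1] if bits else 0),
        ("repeat last", 4, lambda bits: bits[-1] if bits else 0),
    ]

    def predict_next(observed):
        """Posterior-weighted probability that the next bit is 1."""
        weight_one = weight_total = 0.0
        for name, length, predictor in HYPOTHESES:
            prior = 2.0 ** -length  # shorter description -> larger prior weight
            # A hypothesis survives only if it retrodicts every observed bit.
            consistent = all(predictor(observed[:i]) == observed[i]
                             for i in range(len(observed)))
            if consistent:
                weight_total += prior
                weight_one += prior * predictor(observed)
        return weight_one / weight_total if weight_total else 0.5

    print(predict_next([0, 1, 0, 1]))  # only "alternating" survives -> 0.0

The moral is just the shape of the thing: prediction is a posterior-weighted vote among the hypotheses the data hasn't refuted, with prior weight falling off exponentially in description length. MML works in the same spirit, trading model complexity against fit.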

Comment author: Normal_Anomaly 27 August 2011 02:36:18PM 3 points [-]

I said "you" because I don't see myself as competent to work on decision theory-type problems.

Comment author: aletheilia 28 August 2011 03:42:08PM 1 point [-]

Time to level-up then, eh? :)

(Just sticking to my plan of trying to encourage people for this kind of work.)

Comment author: Alexandros 27 August 2011 11:59:19AM *  13 points [-]

Thank you so much for doing this. It makes a very big difference.

Some comments:

Strategy #1, Point 2e seems to cover things that should be in either point 3 or point 4. Also, points 3 and 4 seem to bleed into each other.

If the Rationality training is being spun off to allow SingInst to focus on FAI, why isn't the same done with the Singularity Summit? The slightly bad-faith interpretation of the missing explanation would be that retaining the training arm faces internal opposition while retaining the summit does not. If that's not an inference you want people to draw, it should be addressed.

The level 2 plan includes "Offer large financial prizes for solving important problems related to our core mission". I remember cousin_it mentioning that he's had very good success asking for answers in communities like MathOverflow, but that the main cost was in formalizing the problems. It seems intuitive that geeks are not much motivated by cash, but are very much motivated by a delicious open problem (and the status solving it brings). Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

Thank you again for publishing a document so that this discussion can be had.

Comment author: aletheilia 28 August 2011 03:37:15PM 1 point [-]

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is that 'formalizing open problems' seems like by far the toughest part here, so it would be nice if we could employ collaborative problem-solving to crack this part as well... by formalizing how to formalize various confusing FAI-related subproblems and throwing that on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

Comment author: Normal_Anomaly 27 August 2011 03:12:34AM 1 point [-]

I'm glad that you've done this! I look forward to seeing the list of open problems you intend to work on.

Comment author: aletheilia 27 August 2011 09:35:26AM 0 points [-]

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

Comment author: aletheilia 26 August 2011 09:42:23PM 7 points [-]

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of: "let's see, if I donate $100, that may buy a few meals in the States, especially in CA, but on the other hand, if I keep it, I can live on it for ~2/3 of a month, and since I also (aspire to) work on FAI-related issues, isn't that a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and -$5 isn't going to kill me, I've just made this tiny donation...

Comment author: aletheilia 09 August 2011 11:52:20PM 2 points [-]

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

Comment author: aletheilia 06 July 2011 08:13:46AM 2 points [-]

How about a LW poll regarding this issue?

(Is there some new way to make one since the site redesign, or are we still stuck with the vote-up/vote-down karma-balance pattern?)

In response to comment by [deleted] on SIAI’s Short-Term Research Program
Comment author: lukeprog 24 June 2011 09:08:00PM *  8 points [-]

"What is missing for the SIAI to actually start working on friendly AI?"

The biggest problem in designing FAI is that nobody knows how to build AI. If you don't know how to build an AI, it's hard to figure out how to make it friendly. It's like thinking about how to make a computer play chess well before anybody knows how to make a computer.

In the meantime, there's lots of pre-FAI work to be done. There are many unsolved problems in metaethics, decision theory, anthropics, cosmology, and other subjects that seem to be highly relevant to later FAI development. I'm currently working (with others) toward defining those problems so that they can be engaged by the wider academic community.

Comment author: aletheilia 24 June 2011 11:01:18PM 8 points [-]

Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge of how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.
