He might have changed his mind since then, but in case you missed it: Recommended Reading for Friendly AI Research

This idea probably just comes from looking at the Blue Brain project, which seems to be aiming in the direction of WBE and uses an expensive supercomputer to simulate models of neocortical columns... right, Luke? :)

(I guess because we'd like to see WBE come before AI, since creating FAI looks to be a hell of a lot more difficult than ensuring that a few (hundred) WBEs behave at least humanly friendly and can thereby be of some use in making progress on FAI itself.)

Perhaps the following review article can be of some help here: A Philosophical Treatise of Universal Induction

Time to level up then, eh? :)

(Just sticking to my plan of trying to encourage people to do this kind of work.)

Before resorting to 'large financial prizes', shouldn't level 1 include 'formalize open problems and publicise them'?

The trouble is, 'formalizing open problems' seems like by far the toughest part here, so it would be nice if we could employ collaborative problem-solving to somehow crack this part of the problem... by formalizing how to formalize the various confusing FAI-related subproblems and throwing that on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer's sequences tried to prepare us for...

...open problems you intend to work on.

You mean we? :)

...and we can start by trying to make a list like this, which is actually a pretty hard and important problem all by itself.

I wonder if anyone here shares my hesitation to donate (only a small amount, since I unfortunately can't afford anything bigger) due to thinking along the lines of "let's see, if I donate $100, that may buy a few meals in the States, especially in CA; but on the other hand, if I keep it, I can live ~2/3 of a month on that, and since I also (aspire to) work on FAI-related issues, isn't that a better way to spend the little money I have?"

But anyway, since even the smallest donations matter (tax laws an' all that, if I'm not mistaken) and losing $5 isn't going to kill me, I've just made this tiny donation...

What is the difference between the ideas of recursive self-improvement and intelligence explosion?

They sometimes get used interchangeably, but I'm not sure they actually refer to the same thing. It wouldn't hurt if you could clarify this somewhere, I guess.

How about a LW poll regarding this issue?

(Is there some new way to make one since the site redesign, or are we still at the vote-up/vote-down/karma-balance pattern?)

Even if we presume to know how to build an AI, figuring out the Friendly part still seems a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless F-wise, even though they may lead to a general AI.

What we actually need is knowledge of how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its "anything that works" attitude, isn't going to provide it.
