
AGirlAlone comments on Protein Reinforcement and DNA Consequentialism - Less Wrong

24 Post author: Eliezer_Yudkowsky 13 November 2007 01:34AM



Comment author: AGirlAlone 09 February 2012 12:29:30PM 1 point

IMO a fun project (for those like me who enjoy this but are clearly not smart enough to be part of a Singularity-developing team): create an object-based environment populated by rule-based reproducing agents, with customizable, explicit world rules (as in a computer game, not as in physics), and let them evolve. Maybe users around the world could add new magical artifacts and watch the creatures fail hilariously...

On a more related note, the post sounds ominous for any hope of a general AI. There may be no clear distinction between a protein computer and mere protein, between learning and blindly acting. If we and our desires are in such a position, isn't any AI we make indirectly also blind? Or, as I understand it, Eliezer seems to think that we can (or had better) bootstrap, both for intelligence/computation and for morality. For him, this bootstrapping, this understanding/theory of the generality of our own intelligence (as step one of the bootstrapping), seems to be the Central Truth of Life (tm). Maybe he's right, but for me, with less insight into intelligence, that's not self-evident. And he hasn't clearly explained this crucial point anywhere; he has only advocated it. Come on, it's not as if enough people could be seed AI programmers even armed with that Central Truth. (But who knows.)

Comment author: pnrjulius 07 June 2012 12:43:48AM 1 point

Evolutionary biologists basically do this already, minus the interactivity: they create and run computer simulations of rule-based reproducing agents.
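A minimal sketch of the kind of simulation both comments describe, assuming a toy setup invented for illustration (the cell types, payoffs, and function names here are all hypothetical, not any published model): each agent carries a small rule table ("genome") mapping an observed cell type to an action, earns energy by applying its rules blindly to an explicit, game-like world, and the best performers reproduce with mutation.

```python
import random

# Explicit "world rules", game-style: cell type -> energy payoff for "eat".
# Eating a type-0 cell is harmful; types 1 and 2 are food.
WORLD_RULES = {0: -1, 1: 2, 2: 5}
ACTIONS = ["eat", "skip"]

def random_genome():
    # One rule per cell type: what to do when standing on that cell.
    return {cell: random.choice(ACTIONS) for cell in WORLD_RULES}

def fitness(genome, world):
    # Energy gathered by blindly applying the rule table to each cell.
    energy = 0
    for cell in world:
        if genome[cell] == "eat":
            energy += WORLD_RULES[cell]
    return energy

def mutate(genome, rate=0.1):
    # Copy the rule table, flipping each rule with a small probability.
    child = dict(genome)
    for cell in child:
        if random.random() < rate:
            child[cell] = random.choice(ACTIONS)
    return child

def evolve(generations=50, pop_size=30, world_len=40):
    # Random world, random initial population; truncation selection:
    # the top half survives, the bottom half is replaced by mutated
    # copies of survivors.
    world = [random.choice(list(WORLD_RULES)) for _ in range(world_len)]
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, world), reverse=True)
        survivors = scored[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(fitness(g, world) for g in pop)
```

Adding a new "magical artifact", as the first comment suggests, would just mean adding an entry to `WORLD_RULES` mid-run and watching which rule tables cope; agents whose genomes happen to "eat" a newly harmful cell type fail exactly as blindly as the post describes.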