D_Malik comments on Open thread, November 2011 - Less Wrong

Post author: Oscar_Cunningham 02 November 2011 06:19PM (4 points)


Comment author: D_Malik 03 November 2011 06:17:42AM -1 points

This probably wouldn't work, but has anyone tried to create strong AI by just running a really long evolution simulation? You could make it faster than our own evolution by increasing the evolutionary pressure for intelligence. Perhaps run this until you get something pretty smart, then stop the sim and try to use that 'pretty smart' thing's code, together with a friendly utility function, to make FAI? The population you evolve could be a group of programs that take a utility function as /input/, then try to maximize it. The programs which suck at maximizing their utility functions are killed off.
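The selection scheme described above can be sketched as a toy truncation-selection loop. This is a hedged illustration only, not anyone's actual proposal: a real version would evolve programs, whereas here the "genomes" are just parameter vectors, and `evolve`, `utility`, and every constant are invented for the example.

```python
import random

def evolve(utility, pop_size=50, genome_len=8, generations=100):
    """Toy evolutionary loop: each generation, the candidates that
    score worst on the supplied utility function are killed off."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)     # best maximizers first
        survivors = pop[:pop_size // 2]         # the rest are culled
        # Survivors reproduce with small Gaussian mutations.
        children = [[g + random.gauss(0, 0.1)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=utility)

# Example: a utility function rewarding genomes near the all-ones vector.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

Swapping the utility function passed in is the point of the design: the population is selected for whatever objective the caller supplies.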

How big do you reckon the dumbest AI capable of fooming would be? Has anyone tried just generating random 100k-character brainfuck programs?

Comment author: dlthomas 03 November 2011 04:53:48PM 3 points

Has anyone tried just generating random 100k-character brainfuck programs?

That's an awfully large search space with highly nonlinear dynamics and a small target, and 100k characters might still not be enough to encode what we need to encode. I don't see that approach as very likely to work.
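"Awfully large" is easy to quantify: Brainfuck has 8 commands, so there are 8^100,000 distinct 100k-character strings, roughly 10^90,309 of them.

```python
import math

# Brainfuck has 8 commands, so the number of 100k-character
# programs is 8**100_000; take log10 to get its order of magnitude.
log10_count = 100_000 * math.log10(8)
print(round(log10_count))  # about 10**90309 programs
```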

Comment author: MixedNuts 03 November 2011 04:30:34PM 3 points

It's unlikely we'd ever generate something smart enough to be worth keeping yet dumb enough not to kill us. Also, where do you get your friendly utility function from?

Comment author: gwern 21 November 2011 04:42:16PM 1 point

Has anyone tried just generating random 100k-character brainfuck programs?

There's no way that is going to work; think of how many possible 100k-character Brainfuck programs there are. Brainfuck does have the nice characteristic that each program is syntactically valid, but then you have the problem of running them, which is very resource-intensive (you would expect an AI to be slow, so you need very large time-outs, which means you test very few programs per time interval). Speaking of Brainfuck: http://www.vetta.org/2011/11/aiq/
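The time-out problem can be made concrete with a minimal Brainfuck interpreter that aborts after a fixed step budget. This is a hedged sketch for illustration: `run_bf` and its step limit are invented here, and the input command `,` is omitted for brevity.

```python
def run_bf(code, steps=100_000):
    """Run a Brainfuck program under a step budget, since a randomly
    generated program may well loop forever. Returns (output,
    finished_within_budget)."""
    # Precompute matching brackets; a program with unmatched
    # brackets would be rejected here as invalid.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out = [0] * 30_000, 0, 0, []
    while pc < len(code) and steps > 0:
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
        steps -= 1
    return ''.join(out), steps > 0
```

A candidate that exhausts its budget gets flagged rather than hanging the search, which is exactly why large time-outs translate into few programs tested per unit of time.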