alexflint comments on Size of the smallest recursively self-improving AI? - Less Wrong

Post author: alexflint 30 March 2011 11:31PM | 4 points

Comment author: alexflint 31 March 2011 07:24:27PM * 2 points

Yeah, it's a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term "FOOM", though I agree that it's important to have a concise definition at hand.

In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people's understanding of FOOM, but I'm happy to be corrected.

I would use the term "singularity" to refer more generally to periods of rapid progress, so, e.g., I'd be comfortable saying that FOOM is one kind of process that could lead to a singularity, but not the only one. Does this match the common understanding of these terms?

[1] Perhaps that last "open-ended" clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
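
As a toy illustration of that "open-ended" requirement (hypothetical Python with made-up numbers, not a model of any real system): a self-optimizer whose gains saturate, like the compiler example, versus one in which each improvement enlarges the next.

    # Toy contrast between bounded and open-ended self-improvement (illustrative numbers only).

    def bounded_self_optimizer(speed):
        """Like a compiler recompiling itself: gains saturate, then stop."""
        while True:
            improved = min(speed * 1.5, 2.0)  # improvements hit a ceiling
            if improved <= speed:
                return speed                  # fixed point reached, no further gains
            speed = improved

    def open_ended_self_optimizer(speed, rounds):
        """Each improvement increases the capacity to find the next one."""
        for _ in range(rounds):
            speed *= 1.0 + 0.5 * speed        # better optimizer -> larger next gain
        return speed

    print(bounded_self_optimizer(1.0))         # settles at 2.0 and halts
    print(open_ended_self_optimizer(1.0, 10))  # keeps growing with no obvious ceiling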

Comment author: Giles 02 April 2011 03:24:32PM * 0 points

My understanding of the FOOM process (a toy sketch follows the list):

  • An AI is developed to optimise some utility function or solve a particular problem.
  • It decides that the best way to go about this is to build another, better AI to solve the problem for it.
  • The nature of the problem is such that the best course of action for an agent of any conceivable level of intelligence is to first build a more intelligent AI.
  • The process continues until we reach an AI of an inconceivable level of intelligence.
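
Read as pseudocode, the loop above might look something like this toy sketch (the growth factor and difficulty threshold are invented purely for illustration): below the problem's difficulty, the "best action" at every level is to build a smarter successor rather than attack the problem directly.

    # Toy sketch of the recursion described in the list above (invented numbers).

    def act(intelligence, problem_difficulty, depth=0):
        """Either solve the problem or delegate to a smarter successor."""
        if intelligence >= problem_difficulty:
            print("  " * depth + f"level {intelligence:.0f}: solve the original problem")
            return
        successor = intelligence * 2  # hypothetical capability jump per generation
        print("  " * depth + f"level {intelligence:.0f}: build a successor at level {successor:.0f}")
        act(successor, problem_difficulty, depth + 1)

    act(1.0, 100.0)  # the recursion only bottoms out once a successor clears the difficulty bar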