alexflint comments on Size of the smallest recursively self-improving AI?

Post author: alexflint 30 March 2011 11:31PM (4 points)

Comment author: alexflint 31 March 2011 05:40:26PM 1 point

I agree, though I suppose it makes sense if we assume he was actually describing a product of FOOM rather than the process itself.

Comment author: timtyler 31 March 2011 06:19:26PM 0 points

We should probably scratch that definition - even though it is about the only one provided.

If the term "FOOM" has to be used, it should probably refer to actual rapid progress, not merely to a capability of producing technologies rapidly.

  "I suppose it makes sense if we assume he was actually describing a product of FOOM rather than the process itself."

Creating molecular nanotechnology may be given as homework in the 29th century - but that's quite a different idea from there being rapid technological progress between now and then. You can attain large capabilities by slow and gradual progress - as well as via a sudden rapid burst.
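
To make that distinction concrete, here is a minimal numeric sketch (not from the thread - every function and number below is invented purely for illustration) of two trajectories that end at roughly the same capability, one via steady accumulation and one via a late, sudden burst:

    # Two invented capability trajectories that reach roughly the same
    # endpoint. Only the shapes of the curves matter; all numbers are
    # illustrative.

    def gradual(t, rate=1.0):
        """Slow and steady: capability grows linearly with time."""
        return rate * t

    def burst(t, t_foom=900, base=1.0, doubling_time=10.0):
        """Flat until t_foom, then capability doubles every doubling_time."""
        if t < t_foom:
            return base
        return base * 2 ** ((t - t_foom) / doubling_time)

    horizon = 1000  # stand-in for "the 29th century", in arbitrary units
    print(gradual(horizon))  # 1000.0 -- large capability, no burst
    print(burst(horizon))    # 1024.0 -- comparable capability via a burst

Both curves arrive in roughly the same place; only the path differs.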

Comment author: alexflint 31 March 2011 07:24:27PM 2 points

Yeah, it's a terrible definition. I think the AI-FOOM debate provides a reasonable grounding for the term "FOOM", though I agree that it's important to have a concise definition at hand.

In the post, I used FOOM to mean an optimization process optimizing itself in an open-ended way.[1] I assumed that this corresponded to other people's understanding of FOOM, but I'm happy to be corrected.

I would use the term "singularity" to refer more generally to periods of rapid progress, so e.g. I'd be comfortable saying that FOOM is one kind of process that could lead to a singularity, though not exclusively so. Does this match with the common understanding of these terms?

[1] Perhaps that last "open-ended" clause just re-captures all the mystery, but it seems necessary to exclude examples like a compiler making itself faster but then making no further improvements.
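
The compiler example can be made concrete with a toy sketch (everything here, including both improvement rules, is invented for illustration; "improve" maps a system's capability to the capability of the successor it can build):

    # Bounded versus open-ended self-improvement, with invented rules.

    def run(improve, capability=1.0, steps=50):
        """Apply a self-improvement rule until it stops helping."""
        for _ in range(steps):
            successor = improve(capability)
            if successor <= capability:  # fixed point: no further gains
                break
            capability = successor
        return capability

    # Compiler-like case: each round buys less, converging to a ceiling.
    bounded = run(lambda c: c + 0.5 * (10.0 - c))

    # Open-ended case: each gain enlarges the next gain.
    open_ended = run(lambda c: 1.5 * c)

    print(bounded)     # ~10.0: real improvement, then a halt -- not FOOM
    print(open_ended)  # ~6.4e8 and still climbing under this toy rule

The bounded case captures the compiler: genuine self-improvement that nonetheless converges and stops.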

Comment author: Giles 02 April 2011 03:24:32PM 0 points

My understanding of the FOOM process (a toy sketch follows the list):

  • An AI is developed to optimise some utility function or solve a particular problem.
  • It decides that the best way to go about this is to build another, better AI to solve the problem for it.
  • The nature of the problem is such that the best course of action for an agent of any conceivable level of intelligence is to first build a more intelligent AI.
  • The process continues until we reach an AI of an inconceivable level of intelligence.
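
A toy rendering of that loop (the doubling rule and the threshold standing in for "an inconceivable level of intelligence" are both invented for illustration):

    # Giles's loop in toy form. CONCEIVABLE_LIMIT stands in for "an
    # inconceivable level of intelligence"; the doubling rule is invented.

    CONCEIVABLE_LIMIT = 1_000_000

    def best_action(intelligence):
        """Premise: below the limit, the best move is always to build
        a smarter successor rather than attack the problem directly."""
        if intelligence < CONCEIVABLE_LIMIT:
            return "build_successor"
        return "solve_problem"

    intelligence = 1.0
    generations = 0
    while best_action(intelligence) == "build_successor":
        intelligence *= 2  # each AI builds one roughly twice as capable
        generations += 1

    print(generations, intelligence)  # 20 generations, then it solves

The load-bearing assumption is the third bullet: that building a successor stays the best action at every intelligence level below the limit.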