What would an Incandescence about FAI look like?

1 Post author: VNKKET 01 May 2011 08:30PM

This post spoils Greg Egan's Incandescence.

Incandescence is a success story about some people who notice an existential threat and avoid it using science and engineering.  We see them figure out how gravity works, which is more interesting than it might sound, partly because their everyday experiences are full of gravitational effects that we don't notice on Earth.  At first they do science out of pure curiosity, but it turns into an urgent collective action problem when they discover that their orbit will lead them towards all sorts of disasters, including falling into a black hole.  The solution, it turns out, is to move some dirt around.

Has anyone considered writing a success story about using Friendly AI to solve an existential threat?

Comments (2)

Comment author: anonynamja 02 May 2011 03:20:10PM 1 point [-]

The MOPI/Revelation passages come to mind.

Comment author: MrMind 02 May 2011 01:23:58PM 0 points [-]

In all the stories I've read about an AI dystopia, the proposed solution is to kill the AI: Disney, the Lawnmower Man movie, Rucker's Postsingular, etc. While we know what General Relativity looks like, and so can develop the story of a civilization that happens to discover it, we still have little clue what an FAI would look like, and I don't think we should burden a poor writer with discovering the theory before writing a novel... From here a writer has two choices: use FAI (we can imagine how it looks) to solve some other existential risk, or narrow the UFAI existential risk to some subset where the Friendly part is solvable but not obvious. I think I'll ponder that last track for a while...