TheAncientGeek comments on Siren worlds and the perils of over-optimised search - Less Wrong

Post author: Stuart_Armstrong 07 April 2014 11:00AM

Comment author: TheAncientGeek 13 May 2014 08:06:54PM *  -1 points [-]

That would be the AIXI that is uncomputable?

And don't AIs get out of boxes by talking their way out, round here?

Comment author: Nornagest 13 May 2014 09:16:57PM *  5 points [-]

That would be the AIXI that is uncomputable?

It's incomputable because the Solomonoff prior is, but you can approximate it -- to arbitrary precision if you've got the processing power, though that's a big "if" -- with statistical methods. Searching GitHub for the Monte Carlo approximations of AIXI that eli_sennesh mentioned turned up at least a dozen before I got bored.
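For readers who want the object being discussed spelled out: the Solomonoff prior underlying AIXI is, schematically, a length-weighted mixture over all programs for a universal monotone Turing machine. The notation below is the standard textbook form, not something quoted from this thread:

```latex
% Solomonoff prior over a string x: sum over every program p that makes the
% universal machine U produce an output beginning with x, weighted by the
% program's length l(p).
M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x*} 2^{-\ell(p)}
```

Evaluating that sum exactly means reasoning about what every program does, which is where the uncomputability comes from; the approximations mentioned above swap "all programs" for a restricted, tractable model class.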

Most of them seem to operate on tightly bounded problems, intelligently enough. I haven't tried running one with fewer constraints (maybe eli has?), but I'd expect it to scribble over anything it could get its little paws on.
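As a very rough sketch of the shape those approximations share (a Bayesian mixture over a model class plus Monte Carlo planning), here is a purely illustrative toy. Everything in it -- the CoinEnv bandit, the hand-picked hypothesis list, the rollout planner -- is invented for this example and is not taken from any of the repositories mentioned above; real MC-AIXI-CTW-style implementations use context-tree weighting over percept histories and UCT search instead.

```python
import math
import random

# Purely illustrative: a Bayesian mixture over a tiny hand-picked hypothesis
# class (instead of "all programs") plus Monte Carlo rollouts for planning.

class CoinEnv:
    """Toy two-armed bandit: arm 0 pays off with prob 0.3, arm 1 with prob 0.7."""
    PROBS = (0.3, 0.7)

    def step(self, action):
        return 1 if random.random() < self.PROBS[action] else 0

class MixtureModel:
    """Posterior over candidate (arm 0, arm 1) payoff probabilities."""

    def __init__(self, hypotheses):
        self.hypotheses = hypotheses                 # list of (p0, p1) pairs
        self.log_weights = [0.0] * len(hypotheses)   # uniform prior

    def update(self, action, reward):
        # Bayes update: multiply each hypothesis's weight by the likelihood
        # of the observed reward under that hypothesis.
        for i, hyp in enumerate(self.hypotheses):
            p = hyp[action] if reward else 1.0 - hyp[action]
            self.log_weights[i] += math.log(max(p, 1e-12))

    def sample_hypothesis(self):
        m = max(self.log_weights)
        weights = [math.exp(w - m) for w in self.log_weights]
        return random.choices(self.hypotheses, weights=weights)[0]

def plan(model, actions=(0, 1), rollouts=200, horizon=5):
    """Pick the action with the best average return over sampled futures."""
    def rollout(first_action):
        hyp = model.sample_hypothesis()
        total = 0
        for t in range(horizon):
            a = first_action if t == 0 else random.choice(actions)
            total += 1 if random.random() < hyp[a] else 0
        return total

    scores = {a: sum(rollout(a) for _ in range(rollouts)) / rollouts
              for a in actions}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    env = CoinEnv()
    model = MixtureModel([(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.3, 0.7)])
    for _ in range(50):
        action = plan(model)
        model.update(action, env.step(action))
    print("posterior log-weights:", [round(w, 1) for w in model.log_weights])
```

Run as-is, it concentrates posterior weight on the (0.3, 0.7) hypothesis and keeps choosing the better arm; the step that makes the genuine article intractable is replacing that four-element hypothesis list with something approaching "all computable environments".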

Comment author: TheAncientGeek 14 May 2014 09:10:29AM -1 points [-]

But people do run these things that aren't actually AIXIs, and they haven't actually taken over the world, so they aren't actually dangerous.

So there is no actually dangerous actual AI.

Comment author: CCC 14 May 2014 10:50:37AM 1 point [-]

...it's not dangerous until it actually tries to take over the world?

I can think of plenty of ways in which an AI can be dangerous without taking that step.

Comment author: TheAncientGeek 14 May 2014 11:37:55AM 0 points [-]

Then you had better tell people not to download and run AIXI approximations.

Comment author: CCC 16 May 2014 01:01:20PM *  1 point [-]

Any form of AI, not just AIXI approximations. Connect it up to a car, and it can be dangerous in, at minimum, all of the ways that a human driver can be dangerous. Connect it up to a plane, and it can be dangerous in, at minimum, all the ways that a human pilot can be dangerous. Connect it up to any sort of heavy equipment and it can be dangerous in, at minimum, all the ways that a human operator can be dangerous. (And not merely a trained human; an untrained, drunk, or actively malicious human can be dangerous in any of those roles).

I don't think that any of these forms of danger is sufficient reason to stop AI research, but they should be considered for any practical application.

Comment author: TheAncientGeek 16 May 2014 01:14:59PM *  -1 points [-]

This is the kind of danger XiXiDu talks about... just failure to function... not the kind EY talks about, which is highly competent execution of unfriendly goals. The two are orthogonal.

Comment author: [deleted] 28 May 2014 06:14:38AM 1 point [-]

The difference between one and the other is just a matter of processing power and training data.

Comment author: Lumifer 13 May 2014 09:08:43PM 1 point [-]

That would be the AIXI that is uncomputable?

Sir Lancelot: Look, my liege!
[trumpets play a fanfare as the camera cuts briefly to the sight of a majestic castle]
King Arthur: [in awe] Camelot!
Sir Galahad: [in awe] Camelot!
Sir Lancelot: [in awe] Camelot!
Patsy: [derisively] It's only a model!
King Arthur: Shh!

:-D