Eugine_Nier comments on Siren worlds and the perils of over-optimised search - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (411)
Of course we have, it's called AIXI. Do I need to download a Monte Carlo implementation from GitHub, run it on a university server with the entire machine as its accessible environment, and show logs of the damn thing misbehaving to convince you?
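For anyone who hasn't seen the definition: AIXI picks actions by an expectimax over all computable environments, weighted by a Solomonoff-style prior. A sketch of Hutter's formula, where U is a universal Turing machine, q ranges over environment programs of length ℓ(q), and m is the planning horizon:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
    \bigl(r_t + \cdots + r_m\bigr)
    \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over all programs is incomputable, which is why anything you can actually run is necessarily a Monte Carlo approximation.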
AIs can be causally boxed, just like anything else. That is, as long as the agent's environment absolutely follows causal rules without any exception that would leak information about the outside world into the environment, the agent will never infer the existence of a world outside its "box".
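A minimal sketch of what "causally boxed" means in code, using a hypothetical toy grid world (not any particular implementation): every observation the agent receives is a pure function of the boxed state and the agent's own past actions, so there is no channel through which facts about the outside world can enter.

```python
# Hypothetical sketch of a "causally boxed" environment: observations
# depend only on the boxed state and the agent's actions, never on
# anything outside the box (clock, filesystem, network, ...).

class BoxedGridWorld:
    def __init__(self, width=5, height=5):
        self.width, self.height = width, height
        self.agent_pos = (0, 0)          # all state lives inside the box

    def step(self, action):
        """Advance the boxed world one tick; deterministic in (state, action)."""
        dx, dy = {"up": (0, -1), "down": (0, 1),
                  "left": (-1, 0), "right": (1, 0)}.get(action, (0, 0))
        x = min(max(self.agent_pos[0] + dx, 0), self.width - 1)
        y = min(max(self.agent_pos[1] + dy, 0), self.height - 1)
        self.agent_pos = (x, y)
        reward = 1.0 if self.agent_pos == (self.width - 1, self.height - 1) else 0.0
        observation = self.agent_pos     # a pure function of internal state
        return observation, reward

def run_episode(agent_policy, steps=100):
    """The only interface the agent ever sees: (observation, reward) pairs."""
    env = BoxedGridWorld()
    obs, reward = (0, 0), 0.0
    for _ in range(steps):
        action = agent_policy(obs, reward)
        obs, reward = env.step(action)
```

Whether a real system actually stays boxed is a separate engineering question about side channels (timing, simulator bugs, hardware), which a sketch like this can't settle.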
But then it's also not much use for anything besides Pac-Man.
Given how slow and dumb it is, I have a hard time seeing an approximation to AIXI as a threat to anyone, except maybe itself.
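To make the "slow" part concrete, here is a rough sketch of the brute-force expectimax that a faithful AIXI approximation has to approximate. The `model` argument is a hypothetical stand-in for the mixture over environment programs; the cost grows roughly as (|actions| × |percepts|)^horizon, which is why practical versions such as MC-AIXI-CTW fall back on Monte Carlo tree search over short horizons.

```python
# Hypothetical sketch of why naive AIXI-style planning is expensive:
# a brute-force expectimax over action/percept sequences whose cost
# grows as (len(actions) * len(percepts)) ** horizon.

def expectimax_value(model, history, horizon, actions, percepts):
    """Value of the best action sequence of length `horizon` under `model`.

    `model(history, action, percept)` is assumed to return the model's
    probability of seeing `percept` after taking `action` -- a stand-in
    for the mixture over environment programs in the real definition.
    Each percept is an (observation, reward) pair.
    """
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for action in actions:
        expected = 0.0
        for percept in percepts:
            p = model(history, action, percept)
            if p == 0.0:
                continue
            future = expectimax_value(model, history + [(action, percept)],
                                      horizon - 1, actions, percepts)
            expected += p * (percept[1] + future)
        best = max(best, expected)
    return best
```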
True, but that's an issue of raw compute-power, rather than some innate Friendliness of the algorithm.
It would still be useful to have an example of innate unfriendliness, rather than "it doesn't really run or do anything".
Not just raw compute-power. An approximation to AIXI is likely to drop a rock on itself just to see what happens long before it figures out enough to be dangerous.
Dangerous as in, capable of destroying human lives? Yeah, probably. Dangerous as in, likely to cause some minor property damage, maybe overwrite some files someone cared about? It should reach that level.