shminux comments on The Fabric of Real Things - Less Wrong

Post author: Eliezer_Yudkowsky 12 October 2012 02:11AM




Comment author: thomblake 12 October 2012 07:43:40PM 2 points

In a simulated universe this could be as "simple" as detecting that a certain computation is likely to discover its simulated nature and disallow this computation by altering the inputs.

But mixing "certain computation" and "discover" like that is mixing syntax and semantics - in order to watch out for that occurrence, you'd have to be aware of all possible semantics for a certain computation, to know if it counts as a "discovery".

Comment author: shminux 12 October 2012 08:00:08PM 1 point

you'd have to be aware of all possible semantics for a certain computation

Not at all. You set up a trigger such as "50% of all AI researchers believe in the simulation argument", then trace back their reasons for believing so and restart from a safe point with less dangerous inputs.
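The trigger-and-restart scheme described here might be sketched as follows. Everything in this sketch (the `ToySim` class, the belief fraction, the particular perturbation) is an illustrative assumption of mine, not a real simulator design; it only shows the shape of the mechanism: snapshot, step, check trigger, and on a hit, roll back and alter the inputs.

```python
import copy

class ToySim:
    """Toy stand-in for a simulated world (hypothetical, for illustration)."""
    def __init__(self):
        self.believers = 0.0   # fraction of "AI researchers" who believe
        self.noise = 0.1       # input parameter governing how fast belief spreads

    def snapshot(self):
        return copy.deepcopy(self.__dict__)

    def restore(self, state):
        self.__dict__.update(copy.deepcopy(state))

    def step(self):
        self.believers += self.noise

def trigger(sim):
    # The "50% of all AI researchers believe" condition from the comment.
    return sim.believers >= 0.5

def perturb(sim):
    # "Restart from a safe point with less dangerous inputs":
    # here, halve the rate at which the belief spreads.
    sim.noise /= 2

def run_guarded(sim, steps):
    checkpoint = sim.snapshot()
    for _ in range(steps):
        sim.step()
        if trigger(sim):
            sim.restore(checkpoint)   # roll back to the last safe state
            perturb(sim)              # rerun with altered inputs
        else:
            checkpoint = sim.snapshot()
    return sim
```

The invariant is that every saved checkpoint satisfies the safety condition, so the simulation never advances past a state where the trigger has fired.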

Comment author: thomblake 12 October 2012 08:05:33PM 1 point [-]

You set up a trigger such as "50% of all AI researchers believe in the simulation argument"

If your simulation has beliefs as a primitive, then you can set up that sort of trigger - but then it's not a universe anything like ours.

If your simulation is simulating things like particles or atoms, then you don't have direct access to whether they've arranged themselves into a "belief" unless you keep track of every possible way that an arrangement of atoms can be interpreted as a "belief".

Comment author: shminux 12 October 2012 08:25:38PM 1 point [-]

Sure, if you run your computation unstructured at the level of quarks and leptons, then you cannot tell what happens in the minds of simulated humans. This would be silly, and no one does any non-trivial bit of programming this way. There are always multi-level structures, like modules, classes, interfaces... Some of these can be created on the fly as needed (admittedly, this is a tricky part, though by no means impossible). So after a time you end up with a module that represents, say, a human, with sub-modules representing beliefs and interfaces representing communication with other humans, etc. And now you are well equipped to set up an alert.
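The multi-level structure described above might look something like this. The class and method names are my own assumptions for illustration; the point is only that once "belief" is an explicit sub-module rather than an arrangement of atoms, the alert becomes a trivial query:

```python
class Human:
    """High-level module representing a simulated person (hypothetical)."""
    def __init__(self):
        self.beliefs = set()   # sub-module: an explicit belief store

    def communicate(self, other, belief):
        """Interface: beliefs spread to others through communication."""
        if belief in self.beliefs:
            other.beliefs.add(belief)

def alert(population, belief, threshold=0.5):
    """Fires when the fraction of humans holding `belief` passes `threshold`."""
    holders = sum(belief in h.beliefs for h in population)
    return holders / len(population) >= threshold
```

Against a quark-level simulation, `alert` would have to recognize every physical arrangement that counts as a belief; against this structured representation it is a one-line check.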

Comment author: chaosmosis 13 October 2012 04:55:22AM 0 points [-]

If the Great Psychicator uses triggers on a level of reality less precise than the atomic or subatomic ones, then I believe its triggers could not possibly be precise enough to A. prevent science from discovering psychic powers and simultaneously B. allow normal people not doing science access to its psychic powers.

If there's a flaw in its model of the universe, we can exploit that and use the flaw to do science (this would probably involve some VERY complex workarounds, but the universe is self-consistent so it seems possible in theory). So the relevant question is whether or not its model of the universe is better than ours, which is why I concede that a sufficiently complex Great Psychicator would be able to trick us.

Comment author: Eugine_Nier 13 October 2012 05:22:21AM 0 points [-]

If the Great Psychicator uses triggers on a level of reality less precise than the atomic or subatomic ones, then I believe its triggers could not possibly be precise enough to A. prevent science from discovering psychic powers and simultaneously B. allow normal people not doing science access to its psychic powers.

No, it just needs to be better at optimizing than we are.

Comment author: chaosmosis 13 October 2012 08:50:10AM 0 points [-]

I don't know exactly what you mean by "optimizing", but if your main point is that it's an issue of comparative advantage, then I agree. Or, if your point is that it's not sufficient for humans to have a better model of reality in the abstract (we'd also need to be able to apply that model in such a way as to trick the GP, which might not be possible depending on the nature of the GP's intervention), I can agree with that as well.