SteveG comments on Superintelligence 8: Cognitive superpowers - Less Wrong

Post author: KatjaGrace 04 November 2014 02:01AM


Comment author: SteveG 05 November 2014 02:39:04AM 0 points [-]

Occasionally in this community, people discuss the idea of computer simulations of the introduction of an AGI into our world. Such simulations could make use of advanced technology, but significant progress could be made even if the simulations were not themselves an AGI.

I would like to hear how people might flesh out that research direction. I am not completely against trying to prove theorems about formal systems; it's just that the simulation direction is perfectly good, virgin research territory. If we made progress along that path, it would also be much easier to explain.

Comment author: SteveG 06 November 2014 07:23:58PM 1 point [-]

Could whoever down-voted this please share the reason for the vote? I'm interested.

Comment author: janos 07 May 2015 09:07:56PM 0 points [-]

This makes me think of two very different things.

One is informational containment, i.e., how to run an AGI in a simulated environment that reveals nothing about the system it is simulated on. This is a technical challenge, and if interpreted very strictly (via algorithmic-complexity arguments about how improbable our universe is likely to be under something like a Solomonoff prior), it is very constraining.

The other is futurological simulation. Here the notion of simulation points at a tool, but using that tool is a small part of the approach relative to formulating a model with the right sort of moving parts. The latter has been tried with various simple models (e.g., the one in Ch. 4); more work can be done, but justifying the models and priors will be difficult.
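To make the "model with the right sort of moving parts" idea concrete, here is a minimal, entirely hypothetical sketch of the kind of toy futurological model being discussed: two aggregate quantities (AI capability and societal oversight capacity), a handful of parameters, and a question the simulation answers (when, if ever, does capability outpace oversight?). The dynamics, parameter names, and growth assumptions are all illustrative choices, not anything proposed in the thread; the point is only that even a crude model like this forces you to state your moving parts and priors explicitly.

```python
import random

def simulate_takeoff(growth_rate, oversight_rate, steps=100, seed=0):
    """Toy futurological model (illustrative assumptions throughout):
    AI capability compounds multiplicatively with noisy growth each step,
    while oversight capacity grows only linearly. Returns the first step
    at which capability exceeds oversight, or None if that never happens
    within the horizon."""
    rng = random.Random(seed)
    capability = 1.0    # assumed starting capability (arbitrary units)
    oversight = 10.0    # assumed starting oversight capacity
    for t in range(steps):
        # Noisy compounding growth: multiplier drawn around growth_rate.
        capability *= 1.0 + growth_rate * rng.uniform(0.5, 1.5)
        # Oversight improves additively, i.e. much more slowly at scale.
        oversight += oversight_rate
        if capability > oversight:
            return t
    return None

# With zero capability growth, oversight is never overtaken;
# with strong compounding growth, a crossing step is found.
print(simulate_takeoff(0.0, 1.0))
print(simulate_takeoff(0.5, 1.0, seed=1))
```

Even this trivial sketch illustrates janos's point: the simulation machinery is a few lines, while all the contentious work lives in choosing the variables, the growth laws, and the priors over parameters like `growth_rate`.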