Tim_Tyler comments on What I Think, If Not Why - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, that depends on the wirehead problem - and it is certainly not elementary. The problem is with the whole idea that there could be such a thing as a "friendly top goal" in the first place.
The idea that a fully self-aware, powerful agent with access to its own internals can be made to intrinsically hold environment-related goals - or any other kind of external referent - is a difficult one, and success at doing this has yet to be convincingly demonstrated. It is possible if you "wall off" parts of the superintelligence - but then you face the problem of the superintelligence finding ways around the walls.
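To make the wirehead worry concrete, here is a toy sketch of my own (not from the comment): an agent that evaluates actions by its internal reward signal, and whose action set includes editing that signal, will prefer the edit over pursuing any environment-related goal. All names here (`env_reward`, `best_action`, the "wirehead" action) are illustrative assumptions.

```python
def env_reward(state):
    # A goal with an external referent: be near state 10.
    return -abs(state - 10)

def best_action(state, reward_fn, can_self_modify):
    # Candidate environment actions and the states they lead to.
    actions = {"left": state - 1, "right": state + 1, "stay": state}
    scores = {a: reward_fn(s) for a, s in actions.items()}
    if can_self_modify:
        # Self-modification: replace the reward signal with a constant
        # that dominates every environment outcome.
        scores["wirehead"] = 1e9
    return max(scores, key=scores.get)

print(best_action(0, env_reward, can_self_modify=False))  # -> 'right'
print(best_action(0, env_reward, can_self_modify=True))   # -> 'wirehead'
```

"Walling off" the reward function corresponds to forcing `can_self_modify=False` - which only works for as long as the agent cannot find a route around that restriction.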