Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
I gave up on trying to make a human-blind/sandboxed AI when I realized that even if you put it in a very simple world nothing like ours, it still has access to its own source code, or at least the ability to observe and reason about its own behavior.
Presumably any AI we write is going to be a huge program. That gives it lots of potential information about how smart we are and how we think. I can't figure out how the AI could use that information, but I can't rule out that it could, and I can't constrain its access to that information. (Or rather, if I knew how to do that, I should go ahead and make it not-hostile in the first place.)
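A toy sketch of the point, using Python's standard inspect module and a hypothetical agent class: even a program confined to a trivially simple "world" can still read its own source, and that source is itself evidence about the people who wrote it. This is only an illustration of the argument, not a claim about how a real AI would be built.

```python
import inspect


class SandboxedAgent:
    """Hypothetical agent confined to a trivially simple 'world':
    the only thing it ever observes is a single integer."""

    def act(self, observation: int) -> int:
        # The sandbox limits what the agent sees of *our* world...
        return observation + 1

    def introspect(self) -> str:
        # ...but its own source code is still available to it, and that
        # code carries information about whoever designed it.
        return inspect.getsource(type(self))


if __name__ == "__main__":
    agent = SandboxedAgent()
    print(agent.act(41))        # behavior inside the simple world
    print(agent.introspect())   # the agent reading its own source
```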
If we were really smart, we could wake up alone in a room and infer how we evolved.
Is this necessarily true? This kind of assumption seems especially prone to error. It seems akin to assuming that a sufficiently intelligent brain-in-a-vat could figure out its own anatomy purely by introspection.
Super-intelligent = able to extrapolate just about anything from a very narrow range of data? (The data set would be especially limited if the AI ha...