Postal_Scale comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong
Comments (156)
I like the paper, but I'm wondering how (or whether) it applies to TDT and acausal trading. Doesn't such trading imply a kind of convergence theorem for very powerful TDT agents (namely, that they should converge on an average utility function constructed across all powerful TDT agents in logical space)?
Or have I missed something here? (I've been looking around on Less Wrong for a good post on acausal trading, and am finding bits and pieces, but no overall account.)
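For concreteness, here is one way to cash out the conjectured convergence (a sketch only; the bargaining weights are my assumption, nothing in the thread pins them down):

$$U_{\text{trade}}(o) = \sum_i w_i \, U_i(o), \qquad w_i \ge 0, \quad \sum_i w_i = 1,$$

where $U_i$ is agent $i$'s utility function and $w_i$ its bargaining weight. The "average utility function" in the question is the special case $w_i = 1/n$. Each powerful TDT agent would then act as if maximizing $U_{\text{trade}}$, whatever its individual $U_i$.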
It does indeed imply a form of convergence. I would assume Stuart thinks of the convergence as an artifact of the game environment the agents are in: not a convergence in goals, just in behavior, although the results are basically the same.
If there's convergence in goals, then we don't have to worry about making an AI with the wrong goals. If there's only convergence in behavior, then we do, because building an AI with the wrong goals will shift the convergent behavior in the wrong direction. So I think it makes sense for Stuart's paper to ignore acausal trading and just talk about whether there is convergence in goals.
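A toy sketch of that distinction, with hypothetical utility numbers and equal weights (nothing here comes from the thread): behavior converges on whatever maximizes the weighted sum of utilities, so adding one agent with the wrong utility function shifts the convergent choice even though no agent's goals change.

```python
# Toy model: acausal trade as jointly maximizing a weighted sum of
# utilities. Agents, outcomes, and numbers are all made up.

outcomes = ["A", "B"]

# Each agent's utility over the outcomes (hypothetical values).
utilities = {
    "agent1": {"A": 1.0, "B": 0.0},
    "agent2": {"A": 0.8, "B": 0.2},
}

def convergent_choice(utilities, weights):
    """Outcome maximizing the weighted sum of all agents' utilities."""
    def score(outcome):
        return sum(weights[a] * utilities[a][outcome] for a in utilities)
    return max(outcomes, key=score)

weights = {a: 1.0 for a in utilities}
print(convergent_choice(utilities, weights))  # -> "A"

# Add an AI with the "wrong" goals: its utility shifts the joint
# objective, and hence the convergent behavior, even though every
# other agent's goals are untouched.
utilities["wrong_ai"] = {"A": 0.0, "B": 5.0}
weights["wrong_ai"] = 1.0
print(convergent_choice(utilities, weights))  # -> "B"
```

The behavioral convergence here is entirely an artifact of the joint maximization; each agent's own utility function never changes, which is why convergence in behavior gives no safety guarantee that convergence in goals would.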
Not necessarily; it might destroy the Earth before its goals converge.