Johnicholas comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (156)
I like the paper, but am wondering how (or whether) it applies to TDT and acausal trading. Doesn't such trading imply a kind of convergence result among very powerful TDT agents (they should converge on an average utility function constructed across all powerful TDT agents in logical space)?
Or have I missed something here? (I've been looking around on Less Wrong for a good post on acausal trading, and am finding bits and pieces, but no overall account.)
There was an incident of censorship by EY relating to acausal trading - the community's confused response (chilling effects? agreement?) to that incident explains why there is no overall account.
No, I think it's more that the idea (acausal trading) is very speculative and we don't have a good theory of how it might actually work.
Thanks for this... Glad it's not being censored!
I did post the following on one of the threads; it suggested to me a way in which acausal trading could happen, or at least get started.
Again, apologies if this idea is nuts or just won't work. However, if true, it did strike me as increasing the chance of a simulation hypothesis. (It gives powerful TDT AIs a motivation to simulate as many civilizations as they can, and in a "state of nature", so that they get to see what the utility functions are like, and how likely they are to also build TDT-implementing AIs...)
It was censored, though there's a short excerpt here.
By the way, I still can't stop thinking about that post after 6 months. I think it's my favorite wild-idea scenario I've ever heard of.