dlthomas comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's mostly a question for philosophy of mind, I think specifically a question about intentionality. I think the closest you'll get to a mathematical framework is control theory; controllers are a weird edge case between tools and very simple agents. Control theory is mathematically related to Bayesian optimization, which I think Eliezer believes is fundamental to intelligence: thus identifying the cases where a controller counts as a tool versus an agent would be directly relevant. But I don't see how that mathematics, or any mathematics really, could help you here. It's possible that someone has mathematized arguments about intentionality using information theory or some such; you could Google that. Even so, I think that at this point the ideas are imprecise enough that plain ol' philosophy is what we have to work with. Unfortunately, AFAIK very few people on LW are familiar with the relevant parts of philosophy of mind.
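To make the tool/agent contrast for controllers concrete, here is a toy sketch. It is purely illustrative: the setup, the names `p_controller` and `BayesianController`, and all the numbers are my own assumptions, not anything from the control-theory literature or from SI's work. The point is only that a bare feedback rule is pure stimulus-response, while a controller that maintains and updates an explicit belief about the world starts to shade toward being a very simple agent.

```python
import random

# A bare proportional controller: a "tool" in the sense above --
# pure stimulus-response, with no internal model of the world.
def p_controller(setpoint, reading, gain=0.5):
    """Return a corrective action proportional to the current error."""
    return gain * (setpoint - reading)

# A controller that also maintains a Bayesian posterior over an unknown
# constant disturbance and acts on its current best estimate. The extra
# machinery -- an explicit belief updated by evidence -- is the kind of
# thing that starts to shade it from "tool" toward "very simple agent".
class BayesianController:
    def __init__(self, setpoint, gain=0.5, prior_mean=0.0, prior_var=1.0,
                 obs_var=0.25):
        self.setpoint = setpoint
        self.gain = gain
        self.mean = prior_mean  # posterior mean of the disturbance
        self.var = prior_var    # posterior variance of the disturbance
        self.obs_var = obs_var  # assumed variance of each noisy observation

    def update(self, observed_disturbance):
        # Conjugate Gaussian update (a one-dimensional Kalman step).
        k = self.var / (self.var + self.obs_var)
        self.mean += k * (observed_disturbance - self.mean)
        self.var *= 1.0 - k

    def act(self, reading):
        # Correct for the error and for the disturbance we believe exists.
        return self.gain * (self.setpoint - reading) - self.mean

if __name__ == "__main__":
    random.seed(0)
    true_disturbance = 0.8  # hidden bias pushing the reading upward
    agent = BayesianController(setpoint=20.0)
    temp_tool, temp_agent = 15.0, 15.0
    for step in range(10):
        temp_tool += p_controller(20.0, temp_tool) + true_disturbance
        temp_agent += agent.act(temp_agent) + true_disturbance
        # The agent gets a noisy measurement of the disturbance to learn from.
        agent.update(true_disturbance + random.gauss(0.0, 0.5))
        print(f"step {step}: tool={temp_tool:5.2f}  agent={temp_agent:5.2f}")
```

In this toy run the plain controller settles with a steady-state offset (it can only react), while the belief-holding one compensates for the disturbance it has inferred and converges near the setpoint. Whether having and updating a belief like that amounts to intentionality is exactly the philosophical question, and nothing in the math settles it.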
It is EY's announced intention to work toward an AI that is provably friendly. "Provably" means that said AI must first be defined in some mathematical framework. I don't see how one can make much progress in that area before rigorously defining intentionality.
I guess I am getting ahead of myself here. What would a relevant mathematical framework entail, to begin with?
I don't think that idiom means what you think it means.
Thank you, fixed.
You were probably fishing for "jumping the gun".
Yeah, should have been shooting instead of fishing.
It could be said that you shot yourself in the foot by jumping the shark while fishing for a gun.