thomblake comments on Decision theory: An outline of some upcoming posts - Less Wrong
The "meat" is clearly implementing a computation of the type I described, whereas a tree or rock isn't. Do you dispute that?
A person who has died is no longer running such a computation, but until his brain decays, the agent-algorithm that he was running before he died can theoretically still be retrieved from his brain.
Your point seems to be that part of FAI theory should be a general and rigorous theory of how to extract preferences from any given object. Only then could we have sufficient theoretical support for any specific procedures for extracting preferences from human beings.
You may be right (I'm not sure) but I think that's a separate question from "why one might design [a could/should] agent", which is what started this thread. For that, the informal definition of "agent" that I gave seems to be sufficient, at least to understand the question.
Many would dispute that, possibly including Luciano Floridi. A tree or even a rock engages in information processing - it exchanges heat, electrons, and such with its surroundings, for starters. And there is almost certainly a decompression you can run on some of the information to fit whatever sort of pattern you're looking for.
I've explained before why this reasoning is misguided: to get arbitrary desired information processing out of random processes, you have to apply an ever-expanding interpretation. That means any model that calls e.g. a rock a computer is strictly longer than a model that doesn't, because the former would have to include all of the latter, plus the rock's random data.
So a rock is not a general use computer (though it can be used to compute a result, if the computation you want to perform happens to be isomorphic to whatever e.g. heat transfer is going on right now).
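The description-length point can be made concrete with a toy sketch (everything here is hypothetical illustration, not anything from the thread): to "read" a desired computation out of a rock's random states, the interpretation itself has to encode the answers as a lookup table, so the rock-as-computer model contains the honest program plus all the random data.

```python
import random

def honest_square(x):
    # A genuine computer runs a short, fixed program.
    return x * x

# Stand-in for the rock's dynamics: arbitrary data uncorrelated
# with the computation we want.
rng = random.Random(0)
rock_states = [rng.getrandbits(32) for _ in range(10)]

# To claim the rock "computes" squares, we must supply this interpretation.
# Note it is built BY calling the honest program: all the computational
# work lives in the interpretation, none in the rock.
rock_interpretation = {state: honest_square(i)
                       for i, state in enumerate(rock_states)}

def rock_square(i):
    # "Running" the rock: look up its state in the interpretation table.
    return rock_interpretation[rock_states[i]]

# The interpretation grows with every output we want, while the honest
# program stays fixed - the rock model is the honest model plus noise.
assert all(rock_square(i) == honest_square(i) for i in range(10))
```

The lookup table gets one entry longer for each answer we want out of the rock, which is the "ever-expanding interpretation" above in miniature.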
Now, with that in mind, I was among those who claimed that rocks are agents as defined by AnnaSalamon et al., so how do I reconcile this with the claim that rocks aren't computers?
Well, it's like this: An agent, as defined here, has internal dynamics that could, in principle, be understood as a network of counterfactuals and preferences. A computer, OTOH, does in fact do the work of altering your beliefs about an arbitrary computation. (Generally, that just means concentrating your probability distribution onto the right answer, when before you just figured it was within some range.)
And since Eliezer_Yudkowsky claims that even a pebble embodies the laws of physics, which are nothing but a causal network containing counterfactuals and a species of preference (like energy minimization), that means the term "agent" is carving out a much larger chunk of conceptspace than I think AnnaSalamon et al. intended. Which is what makes it hard for me to understand what the agent concept is supposed to be distinguished from.
Okay, come on guys, give me a break here; I think this post merits an explanation of where I erred rather than (or at least on top of) a downmod. Sure, I might have said something stupid, but I clearly laid out my reasoning about an important distinction that is being made. Help me out here.