Lumifer comments on Why Eat Less Meat? - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (513)
Usually people speak of preferences when there is a possibility of choice -- the agent can meaningfully choose between doing A and doing B.
This is not the case with respect to molecular models, search engines, and light switches.
At least for search engines, I would say there exists a meaningful level of description at which the search engine can be said to choose which results to display in response to a query, approximately maximizing some kind of scoring function.
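That framing can be made concrete with a toy sketch. The documents and the term-overlap scoring function below are invented purely for illustration; real engines use far more elaborate scoring:

```python
# Toy sketch of the "search engine as approximate maximizer" framing.
# The scoring function here is a simple term-overlap count, invented
# for illustration only.
def score(query, doc):
    # How many words the query and the document share.
    return len(set(query.split()) & set(doc.split()))

def search(query, docs, k=2):
    # "Chooses" results by ranking documents under the scoring function
    # and returning the top k.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = ["how to grow potatoes", "tomato soup recipe", "potatoes au gratin how to"]
print(search("how to cook potatoes", docs))
```

At this level of description, "the engine chooses results about potatoes rather than tomatoes" is shorthand for "potato documents rank higher under the scoring function".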
I don't think it is meaningful in the current context. The search engine is not an autonomous agent and doesn't choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print "Ha!" } else { print "Ooops!" }
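For concreteness, the pseudocode above rendered as runnable Python (the function name is mine):

```python
import random

# Runnable version of the pseudocode above: a branch on a random
# number, with no agent anywhere in sight.
def coin_flip_printer():
    if random.random() > 0.5:
        print("Ha!")
    else:
        print("Ooops!")

coin_flip_printer()
```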
"If you search for "potatoes" the engine could choose to return results for "tomatoes" instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results."
"If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz..."
When you flip the light switch "on" it could choose not to allow current through the system, but it will allow current to flow because it wants current to flow when it is in the "on" position.
Except for degree of complexity, what's the difference? "Choice" can be applied to anything modeled as an Agent.
Sorry, I read this as nonsense. What does it mean for a light switch to "want"?
To determine the "preferences" of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.
Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as "preferring not to die", and use that model to make predictions about how the amoeba will respond to various situations.
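That recipe can be sketched in code. The candidate actions and the imputed survival scores below are invented for illustration:

```python
# Hypothetical sketch of the intentional-stance recipe: predict an
# entity's behavior by assuming it maximizes a preference function,
# here "preferring not to die". Actions and scores are made up.
def predict_action(actions, survival_score):
    # The prediction: the agent takes whichever action its imputed
    # preference function ranks highest.
    return max(actions, key=survival_score)

# Toy model of the amoeba: each candidate action mapped to an
# estimated probability of staying alive afterwards.
scores = {"approach_food": 0.9, "ignore_food": 0.4, "swim_into_toxin": 0.1}
print(predict_action(scores, scores.get))  # prints "approach_food"
```

The prediction needs no understanding of the amoeba's internal mechanics; it only needs the observed pattern that its activities end up maintaining its body.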
I think the light switch example is far-fetched, but the search engine isn't. The point is whether there exists a meaningful level of description at which framing the system's behavior in terms of making choices to satisfy certain preferences is informative.
Don't forget that the original context was morality.
You don't think it is far-fetched to speak of morality of search engines?
Yes, it is.
The distinction you are making between the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice" sounds very reminiscent of the traditional naive / confused model of free will that people commonly have before dissolving the question...but you're a frequent poster here, so perhaps I've misunderstood your meaning. Are you using a specialized definition of the word "choice"?
I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.
As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don't think it is useful in the context of talking about morality.
Ah, ok - sorry. The materialist, dissolved view of free-will-related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these ideas yet subscribes to what I'll call the "naive view", for lack of a better word, is very low.
It's not really the particulars of the Sequences which are in question here - the people who say free will doesn't exist, the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and the non-compatibilists, all share a non-dualist view which does not allow them to label the search engine's processes and the human's processes as fundamentally, qualitatively different. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.
By analogy, speaking of choices of humans seems silly, since humans are made of the same basic laws.
The fundamental disagreement here runs rather deeply - it's not going to be possible to talk about this without diving into free will.
Philosophical disagreements aside, that doesn't seem to be a good way to construct priors for other people's views.
If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying lightswitches, talking about the former as "choices" would seem as silly to me as talking that way about the latter does.
But I don't, so it doesn't.
I assume you don't understand the causal mechanisms underlying the actions of humans either. So why does talking about them as "choices" seem silly to you?
I agree with you. Whether we model something as an agent or an object is a feature of our map, not the territory. It's not useful to model light switches as agents because they are so simple that looking at them through the lens of preferences adds neither simplicity nor information. Meanwhile, it is useful to model humans partially as preference-maximizing agents to make approximate predictions.
However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between "choice" and "event" as a feature of the territory itself, and positing a fundamental qualitative difference between a "choice" and other sorts of events. My reply should be seen as an assertion that such qualitative differences are not features of the territory - if it's impossible to model a light switch as having choices, then it's also impossible to model a human as having choices. (My actual belief is that it's possible to model both as having choices or not having them.)
Is your actual belief that there are equivalent grounds for modeling both either way?
If so, I disagree... from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.
If not, to what do you attribute the differential?
...it is possible to model things either way, but it is more useful for some objects than others.
Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximization. A search engine is well modeled as an agent. A human is very well modeled as an agent.
A light switch is very poorly modeled as an agent. Thinking of it in terms of preferences doesn't make it any easier to predict its behavior. But you can model it as an agent, if you'd like.
By "justified" do you mean "useful"?
I am willing to adopt "useful" in place of "justified" if it makes this conversation easier. In which case my question could be rephrased "Is it equally useful to model both either way?"
To which your answer seems to be no... it's more useful to model a human as an agent than it is a light-switch. (I'm inferring, because despite introducing the "useful" language, what you actually say instead introduces the language of something being "well-modeled." But I'm assuming that by "well-modeled" you mean "useful.")
And your answer to the followup question is because the pattern of behavior of a light switch is different from that of a search engine or a human, such that adopting an intentional stance towards the former doesn't make it easier to predict.
Have I understood you correctly?
Yup. Modeling something as a preference-maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes across a diverse array of situations. It allows you to make accurate predictions even when you don't fully understand the mechanics that generate the events you are predicting.
(I distinguished useful and justified because I wasn't sure if "justified" had moral connotations in your usage)
Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word "intentional stance".