Lumifer comments on Why Eat Less Meat? - Less Wrong

Post author: peter_hurford 23 July 2013 09:30PM

Comment author: Lumifer 11 January 2014 02:48:31PM 0 points

Paperclippers, molecular modelers, and search engines seek to maximize a simple set of preferences (number of paperclips, best-fit model, best search results). They have "preferences", but not morally relevant ones.

Usually people speak of preferences when there is a possibility of choice -- the agent can meaningfully choose between doing A and doing B.

This is not the case with respect to molecular models, search engines, and light switches.

Comment author: V_V 11 January 2014 03:24:05PM 0 points

At least for search engines, I would say there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query, approximately maximizing some kind of scoring function.
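
As a minimal sketch of what such a level of description might look like (the function names and the toy scoring rule below are illustrative, not any real engine's internals), the "choice" reduces to returning the results that maximize a scoring function:

    # Toy model of a search engine's "choice" as score maximization.
    def toy_score(query, doc):
        # Illustrative relevance score: number of words shared by query and document.
        return len(set(query.split()) & set(doc.split()))

    def search(query, documents):
        # The engine "chooses" the ordering that maximizes each result's score.
        return sorted(documents, key=lambda doc: toy_score(query, doc), reverse=True)

    print(search("potatoes", ["roast potatoes", "tomato soup"]))
    # -> ['roast potatoes', 'tomato soup']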

Comment author: Lumifer 11 January 2014 03:34:54PM 1 point

there exists a meaningful level of description at which it can be said that the search engine chooses which results to display in response to a query

I don't think it is meaningful in the current context. The search engine is not an autonomous agent and doesn't choose anything any more than, say, the following bit of pseudocode: if (rnd() > 0.5) { print "Ha!" } else { print "Ooops!" }
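
Rendered as runnable Python (assuming rnd() is a uniform draw on [0, 1)):

    import random

    # The same coin-flip branch: a "decision" nobody would call a meaningful choice.
    if random.random() > 0.5:
        print("Ha!")
    else:
        print("Ooops!")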

Comment author: Ishaan 11 January 2014 08:10:32PM 0 points

"If you search for "potatoes" the engine could choose to return results for "tomatoes" instead...but will choose to return results for potatoes because it (roughly speaking) wants to maximize the usefulness of the search results."

"If I give you a dollar you could choose to tear it to shreds, but you instead will choose to put it in your wallet because (roughly speaking) you want to xyz..."

When you flip the light switch "on" it could choose not to allow current through the system, but it will let current flow because it wants current to flow when it is in the "on" position.

Except for degree of complexity, what's the difference? "Choice" can be applied to anything modeled as an agent.

Comment author: Lumifer 11 January 2014 09:44:35PM 1 point

When you flip the light switch "on" it could choose not to allow current through the system, but it will let current flow because it wants current to flow when it is in the "on" position.

Sorry, I read this as nonsense. What does it mean for a light switch to "want"?

Comment author: Ishaan 11 January 2014 09:57:44PM 1 point

To determine the "preferences" of objects which you are modeling as agents, see what occurs, and construct a preference function that explains those occurrences.

Example: This amoeba appears to be engaging in a diverse array of activities which I do not understand at all, but they all end up resulting in the maintenance of its physical body. I will therefore model it as "preferring not to die", and use that model to make predictions about how the amoeba will respond to various situations.
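
As a toy rendering of that procedure (the candidate behaviors and survival odds below are invented for illustration), prediction then amounts to picking whichever behavior best satisfies the posited preference:

    # Posited preference: the amoeba "prefers not to die", so score each
    # candidate behavior by how well it maintains the amoeba's physical body.
    survival_odds = {
        "move_toward_food": 0.9,  # illustrative numbers, not measurements
        "stay_put": 0.5,
        "move_toward_toxin": 0.1,
    }

    def predicted_behavior(odds):
        # Predict the behavior that maximizes the posited preference.
        return max(odds, key=odds.get)

    print(predicted_behavior(survival_odds))  # -> move_toward_food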

Comment author: V_V 14 January 2014 05:51:34PM 0 points

I think the light switch example is far-fetched, but the search engine isn't. The point is whether there exists a meaningful level of description at which framing the system's behavior in terms of making choices to satisfy certain preferences is informative.

Comment author: Lumifer 14 January 2014 07:59:38PM 1 point

Don't forget that the original context was morality.

You don't think it is far-fetched to speak of the morality of search engines?

Comment author: V_V 15 January 2014 12:04:59AM 0 points

Yes, it is.

Comment author: Ishaan 11 January 2014 08:04:58PM -2 points

The distinction you are making between the input-output function of a human as a "choice" vs. the input-output of a machine as "not-a-choice" sounds very reminiscent of the traditional naive/confused model of free will that people commonly have before dissolving the question... but you're a frequent poster here, so perhaps I've misunderstood your meaning. Are you using a specialized definition of the word "choice"?

Comment author: Lumifer 11 January 2014 09:06:24PM 1 point

I have no wish for this to develop into a debate about free will. Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.

As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly. It might be useful conversational shorthand in some contexts, but I don't think it is useful in the context of talking about morality.

Comment author: Ishaan 11 January 2014 09:23:17PM -1 points

Let me point out that just because I know the appropriate part of the Sequences does not necessarily mean I agree with it.

Ah, ok - sorry. The materialist, dissolved view of free-will-related questions has been a strongly held view of mine since a very young age, so my prior for a person who is aware of these ideas yet subscribes to what I'll call the "naive view", for lack of a better word, is very low.

It's not really the particulars of the Sequences here which are in question - the people who say free will doesn't exist, the people who say it does but redefine free will in funny ways, the panpsychists, the compatibilists and the non-compatibilists, all share a non-dualist view which does not allow them to label the search engine's processes and the human's processes as fundamentally, qualitatively different. This is a deep philosophical divide that has been debated for, as far as I am aware, at least two thousand years.

As a practical matter, speaking about choices of light switches seems silly. Given this, I don't see why speaking about choices of search engines is not silly.

By analogy, speaking of choices of humans seems silly, since humans are governed by the same basic laws.

The fundamental disagreement here runs rather deeply - it's not going to be possible to talk about this without diving into free will.

Comment author: Lumifer 11 January 2014 09:42:46PM 2 points

has been a strongly held view of mine since a very young age, so my prior ... is very low.

Philosophical disagreements aside, that doesn't seem to be a good way to construct priors for other people's views.

Comment author: TheOtherDave 11 January 2014 09:56:43PM 0 points

If I understood the causal mechanisms underlying the actions of humans as well as I do those underlying lightswitches, talking about the former as "choices" would seem as silly to me as talking that way about the latter does.

But I don't, so it doesn't.

I assume you don't understand the causal mechanisms underlying the actions of humans either. So why does talking about them as "choices" seem silly to you?

Comment author: Ishaan 11 January 2014 10:09:21PM 1 point

I agree with you. Whether we model something as an agent or an object is a feature of our map, not the territory. It's not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference-maximizing agents to make approximations.

However, in the context of the larger discussion, I interpret Lumifer as treating the distinction between "choice" and "event" as a feature of the territory itself, and positing a fundamental qualitative difference between a "choice" and other sorts of events. My reply should be seen as an assertion that such qualitative differences are not features of the territory - if it's impossible to model a light switch as having choices, then it's also impossible to model a human as having choices. (My actual belief is that it's possible to model both as having choices or not having them.)

Comment author: TheOtherDave 12 January 2014 02:58:30AM 0 points

Is your actual belief that there are equivalent grounds for modeling both either way?

If so, I disagree... from my own perspective, modeling people as preference-maximizing agents is significantly more justified (due to differences in the territory) than modeling a light switch that way.

If not, to what do you attribute the differential?

Comment author: Ishaan 12 January 2014 11:59:12AM 0 points

Is your actual belief that there are equivalent grounds for modeling both either way?

...it is possible to model things either way, but it is more useful for some objects than others.

It's not useful to model light switches as agents because they are too simple, and looking at them through the lens of preferences is not simple or informative. Meanwhile, it is useful to model humans partially as preference-maximizing agents to make approximations.

Modeling an object as an agent is useful when the object exhibits a pattern of behavior which is roughly consistent with preference maximization. A search engine is well modeled as an agent. A human is very well modeled as an agent.

A light switch is very poorly modeled as an agent. Thinking of it in terms of a preference pattern doesn't make it any easier to predict its behavior. But you can model it as an agent, if you'd like.

By "justified" do you mean "useful"?

Comment author: TheOtherDave 12 January 2014 04:12:16PM 0 points

I am willing to adopt "useful" in place of "justified" if it makes this conversation easier. In which case my question could be rephrased "Is it equally useful to model both either way?"

To which your answer seems to be no... it's more useful to model a human as an agent than it is a light switch. (I'm inferring, because despite introducing the "useful" language, what you actually say instead introduces the language of something being "well-modeled." But I'm assuming that by "well-modeled" you mean "useful.")

And your answer to the follow-up question is that the pattern of behavior of a light switch is different from that of a search engine or a human, such that adopting an intentional stance towards the former doesn't make it easier to predict.

Have I understood you correctly?

Comment author: Ishaan 12 January 2014 06:38:57PM 0 points

Yup. Modeling something as a preference-maximizing agent is generally useful for things which systematically behave in ways that maximize certain outcomes in a diverse array of situations. It allows you to make accurate predictions even when you don't fully understand the mechanics generating the events you are predicting.

(I distinguished useful and justified because I wasn't sure if "justified" had moral connotations in your usage)

Edit: On reading the wiki, I tend to agree with the views that the wiki attributes to Dennett. Thanks for the reference and the word "intentional stance".