David_Gerard comments on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? - Less Wrong

Post author: CarlShulman 19 June 2013 01:55AM 15 points




Comment author: David_Gerard 19 June 2013 09:20:51AM *  0 points

From the context, I would presume "about" in the sense of "this is why it's fascinating to the people who make a big deal about it". (I realise the stated reason for LW interest is the scenario of an AI whose source code is known to Omega having to make a decision, but the people being fascinated are humans.)

Comment author: Decius 20 June 2013 02:30:05AM -2 points

Given that your source code is known to Omega, your decision cannot be 'made'.

Comment author: wedrifid 20 June 2013 03:47:06AM 0 points

Given that your source code is known to Omega, your decision cannot be 'made'.

Yes it can.

Comment author: Decius 20 June 2013 05:22:35AM 0 points

Perhaps it would sound better this way: once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made. A customer service representative who follows company policy regardless of the outcome isn't making decisions; he's abdicating the decision-making to someone else.

It's probable that free will doesn't exist, in which case decisions don't exist and agenthood is an illusion; that would be consistent with the line of thinking which has produced the most accurate observations to date. I will continue to act as though I am an agent, because on the off chance that I do have a choice, it is the choice that I want.
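Decius's rule-follower can be sketched in a few lines of code (a purely illustrative sketch; the policy table and function names here are invented for this example, not taken from the thread). Once the policy and the input are fixed, the output is fully determined, which is the sense in which the representative is said not to be making a decision:

```python
# Hypothetical company policy, fixed in advance by someone else.
POLICY = {
    "refund_request": "issue refund",
    "complaint": "apologize and log a ticket",
}

def handle(issue: str) -> str:
    """Follow the policy to the letter, regardless of the outcome."""
    # The representative contributes nothing beyond looking up the rule:
    # given the same policy and the same input, the output never varies.
    return POLICY.get(issue, "escalate to a supervisor")

print(handle("complaint"))  # apologize and log a ticket
```

On this view the representative is "abdicating" the decision to whoever wrote `POLICY`; the dispute in the rest of the thread is over whether executing such a fixed, predictable procedure still counts as making a decision.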

Comment author: RichardKennaway 20 June 2013 09:15:39AM 3 points

Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made.

Really?

Comment author: Decius 21 June 2013 02:16:34AM 0 points

Oddly enough, those are about programming. There's nothing in there that is advice to robots about what decisions to make.

Comment author: RichardKennaway 21 June 2013 08:20:57AM *  1 point

There's nothing in there that is advice to robots about what decisions to make.

It is all about robots -- deterministic machines -- performing activities that everyone unproblematically calls "making decisions". According to what you mean by "decision", they are inherently incapable of doing any such thing. Robots, in your view, cannot be "agents"; a similar Google search shows that no-one who works with robots has any problem describing them as agents.

So, what do you mean by "decision" and "agenthood"? You seem to mean something ontologically primitive that no purely material entity can have; and so you conclude that if materialism is true, nothing at all has these things. Is that your view?

Comment author: Decius 22 June 2013 11:13:25PM 0 points

It would be better to say that materialism being true has determinism being true as a prerequisite, in which case "decisions" do not have the properties we're disagreeing over.

Comment author: wedrifid 20 June 2013 05:33:30PM 0 points

Perhaps it would sound better this way: once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made.

Still not true. The prediction capability of other agents in the same universe does not make the decisions made by an agent into not-decisions. (This is a common confusion that often leads to bad decision-theoretic claims.)

Comment author: Decius 21 June 2013 02:32:49AM *  0 points

If free will is not the case, there are no agents (anymore?)

If it is the case that the universe in the past might lead to an agent making one of two or more decisions, then free will is the case and perfect prediction is impossible; if it is not the case that an entity can take any one of two or more actions, then free will is not the case and perfect prediction is possible.

Note that it is possible for free will to exist but for me to not be one of the agents. Sometimes I lose sleep over that.

Comment author: wedrifid 21 June 2013 06:37:00AM *  1 point

If free will is not the case, there are no agents (anymore?)

A starting point.

Comment author: Decius 22 June 2013 11:01:23PM -1 points

The scale does not decide the weight of the load.

Comment author: wedrifid 23 June 2013 04:50:25AM *  0 points

The scale does not decide the weight of the load.

A sufficiently intelligent and informed AI existing in the orbit of Alpha Centauri but in no way interacting with any other agent (in the present or future) does not by its very existence remove the capability of every agent in the galaxy to make decisions. That would be a ridiculous way to carve reality.

Comment author: Decius 23 June 2013 05:33:53AM 0 points

The characteristic of the universe that allows or prevents the existence of such an AI is what is being carved.

Comment author: TheOtherDave 21 June 2013 03:03:38AM 1 point

Can you clarify what you mean by "agent"?

Comment author: Decius 22 June 2013 10:58:59PM 0 points

One of the necessary properties of an agent is that it makes decisions.

Comment author: TheOtherDave 23 June 2013 01:08:21AM 0 points

I infer from context that free will is necessary to make decisions on your model... confirm?

Comment author: Decius 23 June 2013 05:38:56AM 0 points

Yeah, the making of a decision (as opposed to a calculation) and the influence of free will are coincident.

Comment author: TheOtherDave 20 June 2013 05:49:17PM 0 points

So... hrm.
How do I tell whether something is a decision or not?

Comment author: Luke_A_Somers 20 June 2013 09:24:47PM 0 points

By the causal chain that goes into it. Does it involve modeling the problem and considering values and things like that?

Comment author: TheOtherDave 20 June 2013 09:34:24PM 1 point

So if a programmable thermostat turns the heat on when the temperature drops below 72 degrees F, whether that's a decision or not depends on whether its internal structure is a model of the "does the heat go on?" problem, whether its set-point is a value to consider, and so forth. Perhaps reasonable people can disagree on that, and perhaps they can't. In any case, if I turn the heat on when the temperature drops below 72 degrees F, most reasonable people would agree that my brain has models and values and so forth, and therefore that I have made a decision.

(nods) OK, that's fair. I can live with that.
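The thermostat under discussion reduces to a one-line rule (a minimal sketch; only the 72-degree set-point comes from the comment, everything else is illustrative). Written out, the entire causal chain is a single comparison against a set-point that someone else chose:

```python
SET_POINT_F = 72.0  # chosen by whoever programmed the thermostat

def heat_on(current_temp_f: float) -> bool:
    # Turn the heat on exactly when the temperature drops below the set-point.
    return current_temp_f < SET_POINT_F

print(heat_on(70.0))  # True
print(heat_on(75.0))  # False
```

Whether evaluating this comparison counts as "modeling the problem and considering values" is exactly what the thread is disputing.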

Comment author: Luke_A_Somers 20 June 2013 09:42:31PM *  1 point

The thermostat doesn't model the problem. The engineer who designed the thermostat modeled the problem, and the thermostat's gauge is a physical manifestation of the engineer's model.

It's in the same sense that I don't decide to be hungry - I just am.

ETA: Dangit, I could use a sandwich.

Comment author: TheOtherDave 20 June 2013 10:03:44PM 0 points

Combining that assertion with your earlier one, I get the claim that the thermostat's turning the heat on is a decision, since the causal chain that goes into it involves modeling the problem, but it isn't the thermostat's decision, but rather the designer's decision.
Or, well, partially the designer's.
Presumably, since I set the thermostat's set-point, it's similarly not the thermostat's values which the causal chain involves, but mine.
So it's a decision being made collectively by me and the engineer, I guess.
Perhaps some other agents, depending on what "things like that" subsumes.

This seems like an odd way to talk about the situation, but not a fatally odd way.