David_Gerard comments on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? - Less Wrong
From the context, I would presume "about" in the sense of "this is why it's fascinating to the people who make a big deal about it". (I realise the stated reason for LW interest is the scenario of an AI whose source code is known to Omega having to make a decision, but the people being fascinated are humans.)
Given that your source code is known to Omega, your decision cannot be 'made'.
Yes it can.
Perhaps it would sound better put this way: once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made. A Customer Service Representative who follows company policy regardless of the outcome isn't making decisions; he's abdicating the decision-making to someone else.
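To make the point concrete, here's a minimal sketch in Python, assuming a hypothetical 30-day refund policy as the fixed procedure (the rule and names are illustrative, not from any real policy):

    # A fixed, deterministic policy: given the same inputs, the same
    # output follows every time. Whatever "deciding" happened was done
    # by whoever wrote the rule, not by whoever executes it.
    def refund_decision(days_since_purchase: int, has_receipt: bool) -> bool:
        return has_receipt and days_since_purchase <= 30

    # The representative executing this rule contributes nothing to the
    # outcome; anyone who knows the rule and the inputs can predict it.
    print(refund_decision(10, True))   # True
    print(refund_decision(45, True))   # False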
It's probable that free will doesn't exist, in which case decisions don't exist and agenthood is an illusion; that would be consistent with the line of thinking which has produced the most accurate observations to date. I will continue to act as though I am an agent, because on the off chance I have a choice, that is the choice I want.
Really?
Oddly enough, those are about programming. There's nothing in there that is advice to robots about what decisions to make.
It is all about robots -- deterministic machines -- performing activities that everyone unproblematically calls "making decisions". According to what you mean by "decision", they are inherently incapable of doing any such thing. Robots, in your view, cannot be "agents"; a similar Google search shows that no-one who works with robots has any problem describing them as agents.
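For instance, here is a minimal sketch of the sort of deterministic control loop that people who work with robots routinely describe as the robot "deciding" (the line-following scenario and names are hypothetical):

    # A deterministic controller of the kind unproblematically described
    # as the robot "making a decision" each cycle: read sensors, pick an
    # action by a fixed rule.
    def choose_steering(left_sensor: float, right_sensor: float) -> str:
        # Steer toward whichever side reads the line more strongly.
        if left_sensor > right_sensor:
            return "steer_left"
        if right_sensor > left_sensor:
            return "steer_right"
        return "go_straight"

    print(choose_steering(0.8, 0.2))   # steer_left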
So, what do you mean by "decision" and "agenthood"? You seem to mean something ontologically primitive that no purely material entity can have; and so you conclude that if materialism is true, nothing at all has these things. Is that your view?
It would be better to say that materialism's being true requires determinism's being true, in which case "decisions" do not have the properties we're disagreeing about.
Still not true. That other agents in the same universe are capable of predicting an agent's decisions does not turn those decisions into non-decisions. (This is a common confusion that often leads to bad decision-theoretic claims.)
If free will is not the case, there are no agents (anymore?).
If it is the case that the universe in the past might lead to an agent making one of two or more decisions, then free will is the case and perfect prediction is impossible; if it is not the case that an entity can take any one of two or more actions, then free will is not the case and perfect prediction is possible.
Note that it is possible for free will to exist but for me to not be one of the agents. Sometimes I lose sleep over that.
A starting point.
The scale does not decide the weight of the load.
A sufficiently intelligent and informed AI existing in the orbit of Alpha Centauri but in no way interacting with any other agent (in the present or future) does not by its very existence remove the capability of every agent in the galaxy to make decisions. That would be a ridiculous way to carve reality.
The characteristic of the universe that allows or prevents the existence of such an AI is what is being carved.
Can you clarify what you mean by "agent"?
One of the necessary properties of an agent is that it makes decisions.
I infer from context that free will is necessary to make decisions on your model... confirm?
Yeah, the making of a decision (as opposed to a calculation) and the influence of free will are coincident.
So... hrm.
How do I tell whether something is a decision or not?
By the causal chain that goes into it. Does it involve modeling the problem and considering values and things like that?
So if a programmable thermostat turns the heat on when the temperature drops below 72 degrees F, whether that's a decision depends on whether its internal structure is a model of the "does the heat go on?" problem, whether its set-point is a value to consider, and so forth. Perhaps reasonable people can disagree on that, and perhaps they can't. In any case, if I turn the heat on when the temperature drops below 72 degrees F, most reasonable people would agree that my brain has models and values and so forth, and therefore that I have made a decision.
(nods) OK, that's fair. I can live with that.
The thermostat doesn't model the problem. The engineer who designed the thermostat modeled the problem, and the thermostat's gauge is a physical manifestation of the engineer's model.
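Concretely, a minimal sketch of everything the thermostat itself does at runtime, assuming the 72 degrees F set-point from the example (names are illustrative):

    # The thermostat's entire runtime behavior: compare a reading to a
    # set-point. Any modeling of the "does the heat go on?" problem
    # happened earlier, in the engineer's design and the user's choice
    # of set-point.
    SET_POINT_F = 72.0  # chosen by whoever programmed the thermostat

    def heat_should_run(current_temp_f: float) -> bool:
        return current_temp_f < SET_POINT_F

    print(heat_should_run(70.0))   # True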
It's in the same sense that I don't decide to be hungry - I just am.
ETA: Dangit, I could use a sandwich.
Combining that assertion with your earlier one, I get the claim that the thermostat's turning the heat on is a decision, since the causal chain that goes into it involves modeling the problem; it just isn't the thermostat's decision, but rather the designer's.
Or, well, partially the designer's.
Presumably, since I set the thermostat's set-point, it's similarly not the thermostat's values which the causal chain involves, but mine.
So it's a decision being made collectively by me and the engineer, I guess.
Perhaps some other agents as well, depending on what "things like that" subsumes.
This seems like an odd way to talk about the situation, but not a fatally odd way.