Comment author: wedrifid 19 February 2011 05:35:22AM *  1 point [-]

To put it another way, is your decision to chew gum determined by EDT or by your genes? Pick one.

It can be both. Causation is not exclusionary. I'm suggesting that you are mistaken about the aforementioned handling.

Comment author: ArthurB 19 February 2011 06:23:38AM 0 points [-]

No, it can't. If you use a given decision theory, your actions are entirely determined by your preferences and your sensory inputs.

Comment author: AlephNeil 19 February 2011 04:24:05AM *  0 points [-]

Omega's decision is based on our decision algorithm itself.

Yes, but this dependence factors through the strategy that the algorithm produces. Read Eliezer's TDT document (pdf), where he talks about 'action-determined' and 'decision-determined' problems. Whereas CDT only 'wins' at the former, TDT 'wins' at both. Note that in a decision-determined problem, Omega is allowed but a TDT-minimizer is not.

I argue that an EDT agent should integrate his own decision as evidence.

You appear to mean: When an EDT agent hypothesizes "suppose my decision were X" it's subsequently allowed to say "so in this hypothetical I'll actually do Y instead."

But that's not how EDT works - your modification amounts to a totally different algorithm, which you've conveniently named "EDT".

If EDT's decision is to one-box, then Omega's prediction is that EDT will one-box, and EDT should two-box.

...then Omega's prediction is that EDT will two-box and oops - goodbye prize.

Comment author: ArthurB 19 February 2011 05:23:37AM 0 points [-]

But that's not how EDT works - your modification amounts to a totally different algorithm, which you've conveniently named "EDT".

EDT measures expected value after the action has been taken, but there is no reason for EDT to ignore its own output if it is relevant to the calculation.

...then Omega's prediction is that EDT will two-box and oops - goodbye prize.

It loses, but it is generally claimed that EDT one-boxes.
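A toy sketch of this exchange (my own illustration, not from the thread): if Omega's prediction is produced by running the agent's actual decision procedure, then the "decide to one-box, then take both boxes" move just *is* two-boxing, and it forfeits the prize. The payoff numbers and agent functions below are illustrative assumptions.

```python
# Toy Newcomb's problem: Omega predicts by running the agent itself.
def payoff(action, prediction):
    box_b = 1_000_000 if prediction == "one-box" else 0  # Omega fills B iff it predicts one-boxing
    box_a = 1_000 if action == "two-box" else 0          # box A is always there for the taking
    return box_a + box_b

def plain_edt(_):
    return "one-box"

def modified_edt(_):
    # "EDT's decision is to one-box, so Omega predicts one-boxing,
    # so take both" -- but this function's actual output is exactly
    # what Omega runs to make its prediction.
    tentative = "one-box"
    return "two-box" if tentative == "one-box" else "one-box"

for agent in (plain_edt, modified_edt):
    prediction = agent(None)  # Omega simulates the agent
    action = agent(None)      # the agent then acts the same way
    print(agent.__name__, payoff(action, prediction))
```

The deterministic agent cannot act differently from its own prediction, so the "clever" modification earns only the small box.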

Comment author: wedrifid 19 February 2011 03:25:37AM 2 points [-]

What is generally meant is that having this gene induces a preference to chew gum, which is generally acted upon by whatever decision algorithm is used.

This is actually not what is meant when considering Solomon's problem. They really do mean the actual decision.

Comment author: ArthurB 19 February 2011 05:20:48AM *  0 points [-]

This case is handled in the previous sentence. If this is your actual decision, and your actual decision is the product of a decision algorithm, then your decision algorithm is not EDT.

To put it another way, is your decision to chew gum determined by EDT or by your genes? Pick one.

Comment author: ArthurB 10 September 2009 06:25:43PM -1 points [-]

As has been pointed out, this is not an anthropic problem; however, there still is a paradox. I may be stating the obvious, but the root of the problem is that you're doing something fishy when you say that the other people will think the same way and that your decision will determine theirs.

The proper way to make a decision is to have a probability distribution on the code of the other agents (which will include their prior on your code). From this I believe (but can't prove) that you will take the correct course of action.

Newcomb-like problems fall in the same category; the trick is that there is always a belief about someone's decision-making hidden in the problem.

Comment author: SforSingularity 27 August 2009 10:13:00PM *  -2 points [-]

The answer is yes.

I think we have just established that the answer is no... for the definition of "approximate" that you gave...

Comment author: ArthurB 27 August 2009 11:20:12PM 1 point [-]

Hmm, no, you haven't. The approximation depends on the scale, of course.

Comment author: Johnicholas 27 August 2009 09:44:26PM 0 points [-]

And if you use the ArthurB definition of "approximately" (which is an excellent definition for many purposes), then a piecewise constant function would do just as well.

Comment author: ArthurB 27 August 2009 10:05:57PM 0 points [-]

Indeed.

But I may have gotten "scale" wrong here. If we scale the error at the same time as we scale the part we're looking at, then differentiability is necessary and sufficient. If we're concerned with approximating the function on a small enough part, then continuity is what we're looking for.
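A numeric sketch of this distinction (my own illustration; the function choices are mine): for a differentiable function like x², the best-line error on [−h, h] divided by h goes to zero as h shrinks, while for |x|, which is continuous but not differentiable at 0, that ratio stays bounded away from zero.

```python
import numpy as np

def max_err_best_line(f, h, n=10001):
    """Worst absolute error of a fitted line to f on [-h, h]."""
    xs = np.linspace(-h, h, n)
    ys = f(xs)
    # Least-squares line; close enough to the best uniform
    # approximation for this illustration.
    a, b = np.polyfit(xs, ys, 1)
    return np.max(np.abs(ys - (a * xs + b)))

for h in (1.0, 0.1, 0.01):
    smooth = max_err_best_line(lambda x: x**2, h)     # differentiable
    kink = max_err_best_line(lambda x: np.abs(x), h)  # kink at 0
    print(h, smooth / h, kink / h)
```

For x² the ratio shrinks like h itself; for |x| it hovers near 1/2 no matter how small the window gets.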

Comment author: SforSingularity 27 August 2009 09:28:33PM 0 points [-]

ok, but with this definition of "approximate", a piecewise linear function with finitely many pieces cannot approximate the Weierstrass function.

Furthermore, two nonidentical functions f and g cannot approximate each other. Just choose some x where they differ and epsilon less than |f(x) - g(x)|; then no matter how small your neighbourhood is, |f(x) - g(x)| > epsilon.
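A concrete sketch of this point (my own illustration): if f and g differ at x0, then x0 itself sits inside every neighbourhood of x0, so the worst-case gap there never drops below a fixed epsilon.

```python
# If f(x0) != g(x0), no neighbourhood of x0 makes the worst-case
# difference smaller than eps = |f(x0) - g(x0)| / 2.
f = lambda x: x
g = lambda x: x + 1.0
x0 = 0.0
eps = abs(f(x0) - g(x0)) / 2  # eps = 0.5 here

for delta in (1.0, 0.1, 0.001):
    xs = [x0 + delta * (k / 50 - 1) for k in range(101)]  # samples of [x0-delta, x0+delta]
    worst = max(abs(f(x) - g(x)) for x in xs)
    print(delta, worst > eps)  # x0 is in every neighbourhood, so this stays True
```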

Comment author: ArthurB 27 August 2009 09:45:04PM 1 point [-]

ok, but with this definition of "approximate", a piecewise linear function with finitely many pieces cannot approximate the Weierstrass function.

The original question is whether a continuous function can be approximated by a linear function at a small enough scale. The answer is yes.

If you want the error to decrease linearly with scale, then continuity is not sufficient, of course.

Comment author: SforSingularity 27 August 2009 08:38:27PM 0 points [-]

taboo "approximate" and restate.

Comment author: ArthurB 27 August 2009 09:04:49PM 1 point [-]

I defined "approximate" in another comment.

Approximate around x: for every epsilon > 0, there is a neighborhood of x over which the absolute difference between the approximation and the approximated function is always lower than epsilon.

Adding a slope to a small segment doesn't help or hurt the ability to make a local approximation, so continuity is both sufficient and necessary.
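The definition above can be sketched numerically (my own illustration; the choice of sin and the sampling scheme are mine): for a continuous f, the constant function f(x0) satisfies the definition, since halving the neighbourhood eventually pushes the sampled worst-case difference below any given epsilon.

```python
import math

def neighbourhood_for(f, x0, eps, samples=1001):
    """Halve delta until the sampled worst-case difference between
    f and the constant f(x0) on [x0 - delta, x0 + delta] is < eps."""
    delta = 1.0
    while True:
        worst = max(abs(f(x0 + delta * (2 * k / (samples - 1) - 1)) - f(x0))
                    for k in range(samples))
        if worst < eps:
            return delta
        delta /= 2.0

print(neighbourhood_for(math.sin, 1.0, 0.01))
```

Continuity at x0 guarantees the loop terminates for any eps > 0; the returned delta witnesses the neighbourhood the definition asks for.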

Comment author: SforSingularity 27 August 2009 06:53:04PM *  -1 points [-]

you won't see the difference

that is because our eyes cannot see nowhere differentiable functions, so a "picture" of the Weierstrass function is some piecewise linear function that is used as a human-readable symbol for it.

Consider that when you look at a "picture" of the Weierstrass function and pick a point on it, you would swear to yourself that the curve happens to be "going up" at that point. Think about that for a second: the function isn't differentiable - it isn't "going" anywhere at that point!

Comment author: ArthurB 27 August 2009 07:01:13PM *  2 points [-]

that is because our eyes cannot see nowhere differentiable functions

That is because they are approximated by piecewise linear functions.

Consider that when you look at a "picture" of the Weierstrass function and pick a point on it, you would swear to yourself that the curve happens to be "going up" at that point. Think about that for a second: the function isn't differentiable - it isn't "going" anywhere at that point!

It means that at no point can you make a linear approximation whose precision increases like the inverse of the scale; it doesn't mean you can't approximate.

Comment author: Johnicholas 27 August 2009 10:29:09AM 3 points [-]

Under the usual mathematical meanings of "continuous", "function" and so on, this is strictly false. See: http://en.wikipedia.org/wiki/Weierstrass_function

It might be true under some radically intuitionist interpretation (a family of philosophies I have a lot of sympathy with). For example, I believe Brouwer argued that all "functions" from "reals" to "reals" are "continuous", though he was using his own interpretation of the terms inside of quotes. However, such an interpretation should probably be explained rather than assumed. ;)

Comment author: ArthurB 27 August 2009 06:33:11PM 1 point [-]

No, he's right. The Weierstrass function can be approximated with a piecewise linear function. It's obvious: pick N equally spaced points and join them linearly. For N big enough, you won't see the difference; the error becomes infinitesimally small as N gets bigger.
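The construction described here can be checked numerically (my own sketch; the series parameters a, b and the truncation depth are illustrative choices, and the tail terms are negligibly small): sample a truncated Weierstrass series at N + 1 equally spaced points, join them linearly, and watch the worst-case gap shrink as N grows.

```python
import numpy as np

def weierstrass(x, a=0.5, b=13, terms=25):
    """Truncated Weierstrass series sum a^n cos(b^n pi x)."""
    return sum(a**n * np.cos(b**n * np.pi * x) for n in range(terms))

def max_gap(N, probes=20001):
    """Largest gap between the function and its piecewise linear
    interpolant through N + 1 equally spaced points on [0, 1]."""
    xs = np.linspace(0.0, 1.0, N + 1)
    fine = np.linspace(0.0, 1.0, probes)
    return np.max(np.abs(weierstrass(fine) - np.interp(fine, xs, weierstrass(xs))))

for N in (100, 1000, 10000):
    print(N, max_gap(N))
```

The gap shrinks as N grows, but slowly: each tenfold increase in N only tames the next few oscillation scales, which is exactly the "you can approximate, just not with linearly improving precision" point made above.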
