Who says fruit is to be preferred to foliage?
I often wonder about something along this line when speaking of education. Are students learning to get a job (fruit) or to acquire culture (foliage)? Should choosing between one or the other be up to the student or to society? I believe the most common answer is: we study for a job, and the choice is made by society. But I, for one, cannot so easily dismiss the question. It has too much to do with the meaning of life: are people living to work/act or to understand/love?
That's obviously not the only wa...
A true Omega needs to make both P(box B full | take one box) and P(box B empty | take both boxes) high. The proposed scheme ensures that P(box B full | habitual one-boxer) and P(box B empty | habitual two-boxer) are high, which is not quite the same.
If I understand correctly the distinction you're making between "habitual one-boxer" and "take one box", the first is about the player's past history and the second about the future. If so, I guess you are right. I'm indeed using the past to make my prediction, as using the future is beyond my reach...
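To make the gap explicit (my notation, an illustration rather than anything in the problem statement): since my program fills the boxes from the habit alone, $P(\text{B full} \mid h, \text{one-box}) = P(\text{B full} \mid h)$, so the conditional a true Omega needs decomposes over the habits the program can observe:

$$P(\text{B full} \mid \text{one-box}) = \sum_{h \in \{\text{1B},\, \text{2B}\}} P(\text{B full} \mid h)\, P(h \mid \text{one-box}).$$

If the habit almost determines the actual play, then $P(\text{habitual 1B} \mid \text{take one box}) \approx 1$, and the high $P(\text{B full} \mid \text{habitual 1B})$ my scheme guarantees carries over to the conditional the problem actually needs.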
It's comforting sometimes to read from someone else that rationality is not the loser's way, and arguably more so for the Prisoner's Dilemma than for Newcomb's, if you consider the current state of our planet and the tragedy of the commons.
I'm writing this because I believe I succeeded in writing a computer program (it is so simple I can't call it an AI) able to actually simulate Omega in a Newcomb game. What I describe below may look like an iterated Newcomb's problem, but I claim it is not, and I will explain why.
When using my program the human player will actually b...
I don't know if you have seen it, but I have posted an actual program playing Newcomb's game. As far as I understand what I have done, this is not an iterated Newcomb's problem, but a single-shot one. You should also notice that the calibration phase does not return output to the player (well, I added some display of the reached accuracy, but this is not necessary).
Unless I overlooked some detail, the predictor accuracy is currently tuned to above 90%, but any level of accuracy is reachable.
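To see what that accuracy buys, here is the expected-value arithmetic at exactly 90%, assuming the usual $1,000 / $1,000,000 payoffs (my worked example):

$$E[\text{one-box}] = 0.9 \times 1{,}000{,}000 = 900{,}000,$$
$$E[\text{two-box}] = 0.1 \times 1{,}000{,}000 + 1{,}000 = 101{,}000,$$

and the gap only widens as calibration pushes the accuracy higher.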
As I explained yesterday, the key point was to run some "cali...
I posted a possible program doing what I describe in another comment. The trick, as expected, is that it's easier to change the human player's understanding of the nature of Omega to reach the desired predictability. In other words: you just remove the human's free will (and, running my program, the player learns very quickly that this is in his best interest), then you play. What is interesting is that the only way to remove his free will that is compatible with Newcomb's problem description is to make him a one-boxer. The incentive to make him a two-boxer would be to exhibit a bad predictor, and that's not compatible with Newcomb's problem.
Here is an actual program (written in Python) implementing the described experiment. It has two stages. The first is just calibration, intended to find out whether the player is one-boxing or two-boxing. The second is a straightforward non-iterated Newcomb problem. Some randomness is used to prevent the player from knowing exactly when calibration stops and the test begins, but the calibration part does not care at all whether it will predict the player is a one-boxer or a two-boxer; it is just intended to create an actual predictor behaving as described in Newcomb's.
print "...
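Since the listing above is cut off, here is a minimal sketch of what such a two-stage program might look like (a hypothetical reconstruction under my own assumptions about the thresholds and the predictor, not the original source):

```python
import random

# Stage 1 ("calibration") measures how well a naive predictor tracks the
# player; stage 2 is one unannounced, single-shot Newcomb round.

TARGET_ACCURACY = 0.9  # the "above 90%" knob mentioned above (assumed)
MIN_ROUNDS = 10        # assumed minimum history before the test may start
SWITCH_PROB = 0.3      # assumed randomness hiding when the test begins

def predict(history):
    """Predict the player's next choice as their majority choice so far."""
    if not history:
        return random.choice([1, 2])
    return 1 if history.count(1) >= history.count(2) else 2

def ask_choice():
    while True:
        c = input("Take only box B (1) or both boxes (2)? ")
        if c in ("1", "2"):
            return int(c)

history, hits = [], 0
while True:
    prediction = predict(history)
    accuracy = hits / len(history) if history else 0.0
    # Silently switch to the real game once the predictor is good enough;
    # the player is never told in advance which round is the real one.
    if (len(history) >= MIN_ROUNDS and accuracy >= TARGET_ACCURACY
            and random.random() < SWITCH_PROB):
        break
    choice = ask_choice()
    hits += (prediction == choice)
    history.append(choice)
    print("Predictor accuracy so far: %.0f%%" % (100.0 * hits / len(history)))

# The single-shot Newcomb round: box B is filled *before* the player
# chooses, using nothing but the calibrated prediction.
box_b = 1000000 if predict(history) == 1 else 0
choice = ask_choice()
print("You win $%d" % (box_b if choice == 1 else box_b + 1000))
```

The calibration rounds deliberately reveal nothing but the running accuracy, matching the description above: the player cannot tell the practice rounds from the real one.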
... If my program runs for as long as the wished accuracy is not reached, it can reach any accuracy. Truly random numbers are also expected to deviate toward extremes sometimes in the long run (if they do not behave like that, they are not random). As these are very rare events, against random players the expected accuracy would certainly never be reached in a human life.
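To put a number on "very rare" (my own figures, using an illustrative bar of 90% agreement over 100 rounds):

```python
from math import comb

# Chance that a uniformly random player agrees with a fixed prediction
# in at least 90 of 100 rounds: a binomial tail with p = 1/2.
n, k_min = 100, 90
tail = sum(comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
print(tail)  # on the order of 1e-17
```

At odds like that, a random player would indeed wait far longer than a human lifetime for the real game to start.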
This is why I claim the "calibration phase" described above takes place before Newcomb's problem. When the actual game starts, the situation described in Newcomb's problem is exactly what is reached. TH...
As proposed, the idea is to run the program in "test mode". To simulate the super-being Omega we give it the opportunity to decide when the game stops being a simulation (predictor calibration) and starts being the "real game". To be fair, this change (or the rules governing it) will be communicated to some external judge before the actual "real play", but it will not be communicated to the player (or obviously it would break any calibration accuracy). A possible rule could be to start the real game when some fixed accuracy is reached...
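One simple way to hand the rule to the judge without leaking it to the player (my illustration, not part of the original proposal) is a hash commitment: publish the digest before play, reveal the plain-text rule afterwards.

```python
import hashlib

# The judge receives only this digest before play, and the plain-text
# rule afterwards, to verify the switch was decided in advance.
rule = b"the real game starts at the first round where accuracy >= 0.90"
commitment = hashlib.sha256(rule).hexdigest()
print(commitment)
```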
I do not see your reasoning here. What I'm proposing is not letting the player know when the practice rounds stop and the real round starts. That indeed means a one-boxer would get higher rewards in both the practice and the real rounds, and that's why I believe it's an argument for one-boxing.
My proposal for "simulating" Newcomb's may not be accurate (and it's certainly not perfect), but you can't conclude that based on the (projected) outcome of the experiment disagreeing with what you expect.
So, what is wrong with believing in probabilities? To ask that question is already to presuppose the one-boxing answer, and to miss the point that the problem itself may be problematic.
That is going for the third option and dodging the need to point out exactly why the problem would not be well posed. I can write a program working as Newcomb's problem is described if I go for the "imperfect predictor" version where the being is merely right "most of the time". A way to do it could be to let the player run a number of practice (or calibrat...
Differing outcomes are a problem by themselves. Either one reasoning is right and the others are wrong, or basic logic is broken (and it would follow that all maths are broken). It could also be that some hypotheses absolutely necessary for one reasoning or the other are implicit and unstated.
This is why, even if to me Newcomb's is not a problem, it is still critical to find where others' reasoning is broken, or which assumptions are hidden. Failure to exhibit any error in someone else's reasoning would lead to the conclusion that either my reasoning is broken (an...
Reminds me of this one from Terry Pratchett:
"All you get if you are good at digging holes it's a bigger shovel."
I have more or less the same point of view and applied it to the non-iterated Prisoner's Dilemma (as Newcomb's is merely half a Prisoner's Dilemma, as David Lewis suggested in an article; on this I agree with him, but not on his conclusion).
What is at stake here (in Newcomb's or the PD) may not be that easy to accept anyway. It's probability and Bayes against causality. The doom loop in Newcomb's (the reasoning loop leading to losing 1 million, as I see it) is stating that the content of the boxes is already fixed when you play, hence your action won't change a...
I'm certainly cynical, but I see the point of complaining about the drinks.
Not all airplane tickets are sold at the same price. But basically everybody in the plane gets the same share of progress, science, technology, and human labour and sweat.
Hence, how do we account for the pricing difference?
The drinks, people.
Why not put some figures on the 'identicality' of the players and see what comes out?
A simple way is to consider the probability P that both players will play the same move. That's a simple measure of how similar the players are.
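As an illustration with one standard payoff matrix (my numbers: temptation 5, reward 3, punishment 1, sucker 0), if the other player makes my move with probability P:

$$E[\text{cooperate}] = 3P + 0\,(1 - P) = 3P, \qquad E[\text{defect}] = 1\,P + 5\,(1 - P) = 5 - 4P,$$
$$E[\text{cooperate}] > E[\text{defect}] \iff 7P > 5 \iff P > 5/7 \approx 0.71.$$

So with these payoffs, cooperation already pays once the players are about 71% 'identical', with no causal link required.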
Remember, I am not stating that there is any causal dependency between the players (it's forbidden by the rules):
A and B could be twins raised in a tight family
A and B could be one unique person asked to play against several unknown opponents, not knowing she is playing against herself (experimental psychologists can be quite perver
Thanks for fixing my broken English.
There are actually several quotes expressing the same idea in different Terry Pratchett books, every one of them much better than what I could remember. I dug up these two:
In Wyrd Sisters you have (Granny Weatherwax speaking): “The reward you get for digging holes is a bigger shovel.”
And another one from "Carpe Jugulum" that I like even better (also Granny Weatherwax speaking): "The reward for toil had been more toil. If you dug the best ditches, they gave you a bigger shovel."