I guess my position is thus:
While there are sets of probabilities which, by themselves, are not adequate to capture the information about a decision, there is always some set of probabilities which is adequate to capture it.
In that sense I do not see your article as an argument against using probabilities to represent decision information, but rather as a reminder to use the correct set of probabilities.
I don't think it's correct to equate probability with expected utility, as you seem to do here. The probability of a payout is the same in the two situations. The point of this example is that the probability of a particular event does not determine the optimal strategy. Because expected utility depends on your strategy, it differs between the two cases even though the probability does not.
Hmmm. I was equating them as part of the standard technique of calculating the probabilities of outcomes given your actions, then multiplying by the utilities of the outcomes and summing to find the expected u...
The subtlety is about what numerical data can formally represent your full state of knowledge. The claim is that a mere probability of getting the $2 payout does not.
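A minimal sketch of that standard technique, with made-up probabilities and payoffs rather than the article's numbers:

```python
# Expected utility as usually computed: for each available action, take the
# probability of each outcome given that action, multiply by the outcome's
# utility, and sum. All numbers below are placeholders.

def expected_utility(outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) over all outcomes."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

utility = {"$2 payout": 2.0, "keep $1": 1.0, "nothing": 0.0}

# One distribution over outcomes per candidate action/strategy.
actions = {
    "gamble the coin": {"$2 payout": 0.45, "nothing": 0.55},
    "keep the coin":   {"keep $1": 1.00},
}

for name, probs in actions.items():
    print(name, "->", expected_utility(probs, utility))
```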
However, a single probability for each outcome given each strategy is all the information needed. The problem is not with using single probabilities to represent knowledge about the world; it's with the straw math that was used to represent the technique. To me, this reasoning is equivalent to the following:
"You work at a store where management is highly disorganized. Although they pr...
The exposition of meta-probability is well done, and shows an interesting way of examining and evaluating scenarios. However, I would take issue with the first section of this article in which you establish single probability (expected utility) calculations as insufficient for the problem, and present meta-probability as the solution.
In particular, you say
...What’s interesting is that, when you have to decide whether or not to gamble your first coin, the probability is exactly the same in the two cases (p=0.45 of a $2 payout). However, the rational course
It's also irrelevant to the point I was making. You can point to different studies giving different percentages, but however you slice it, a significant portion of the men she interacts with would have sex with her if she offered. So maybe 75% is only true for a certain demographic, but replace it with 10% for another demographic and it doesn't make a difference.
I was reading a LessWrong post and found this paragraph, which lines up with what I was trying to say:
Some boxes you really can't think outside. If our universe really is Turing computable, we will never be able to concretely envision anything that isn't Turing-computable—no matter how many levels of halting oracle hierarchy our mathematicians can talk about, we won't be able to predict what a halting oracle would actually say, in such fashion as to experimentally discriminate it from merely computable reasoning.
Analysis of the survey results seems to indicate that I was correct: http://lesswrong.com/lw/fp5/2012_survey_results/
Yes, I agree. I can imagine some reasoning being conceiving of things that are trans-Turing-complete, but I don't see how I could make an AI do so.
As mentioned below, you'd need to make infinitely many queries to the Turing oracle. But even if you could, that wouldn't make a difference.
Again, even if there were a module to do infinitely many computations, the code I wrote still couldn't tell the difference between that being the case and the module being a really good computable approximation of one. Again, it all comes back to the fact that I am programming my AI on a Turing-complete computer. Unless I somehow (personally) develop the skills to program trans-Turing-complete computers, then wh...
I don't see how this changes the possible sense-data our AI could expect. Again, what's the difference between infinitely many computations being performed in finite time and only the computations numbered up to a point too large for the AI to query being calculated?
If you can give me an example of a universe for which the closest Turing machine model will not give indistinguishable sense-data to the AI, then perhaps this conversation can progress.
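To make the indistinguishability point concrete, here is a toy sketch (the step bound and the generator-based "programs" are my own stand-ins, with the bound playing the role of "more steps than any physically realizable computer could check"):

```python
# Toy model: a bounded "halting oracle" that simulates a program for at most
# step_bound steps. Programs are represented as generators that yield once per
# simulated step and return when they halt.

def bounded_halting_oracle(program_steps, step_bound):
    steps = 0
    for _ in program_steps:
        steps += 1
        if steps >= step_bound:
            # Possibly wrong, but exposing the error would require running the
            # program past step_bound steps, which by assumption cannot be done.
            return False
    return True  # the program halted within the bound

def halts_after(n):          # a program that halts after n steps
    for _ in range(n):
        yield

def loops_forever():         # a program that never halts
    while True:
        yield

STEP_BOUND = 10**6
print(bounded_halting_oracle(halts_after(100), STEP_BOUND))  # True
print(bounded_halting_oracle(loops_forever(), STEP_BOUND))   # False
```

Any finite set of answers the AI can actually verify is consistent both with this bounded oracle and with a "true" one.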
Again, what distinguishes a "Turing oracle" from a finite oracle with a bound well above the realizable size of a computer in the universe? They are indistinguishable hypotheses. Giving a Turing-complete AI a Turing oracle doesn't make it capable of understanding anything more than Turing-complete models. The Turing-transcendent part must be an integral part of the AI for it to have non-Turing-com...
Well, I suppose, starting with the assumption that my superintelligent AI is merely Turing-complete, I think that we can only say our AI has a "hypothesis about the world" if it has a computable model of the world. Even if the world weren't computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation. Stable time loops seem computable through enumeration, as you show in the post.
Now, if you claim that my assumption that the AI is computable is flawed, well then I give up. I truly have no idea how to program an AI more powerful than a Turing-complete one.
If you don't spend two months salary on a diamond ring, it doesn't mean you don't love your Significant Other. ("De Beers: It's Just A Rock.") But conversely, if you're always reluctant to spend any money on your SO, and yet seem to have no emotional problems with spending $1000 on a flat-screen TV, then yes, this does say something about your relative values.
I disagree, or at least the way it's phrased is misleading. The obvious completion of the pattern is that you care more about a flat screen TV than your SO. But that's not a valid com...
From what I could read on the iqtest page, it seemed that they didn't do any correction for self-selection bias, but rather calculated scores as if they had a representative sample. Based on this, I would guess that the internet IQ test will underestimate your score (p=0.7).
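A quick simulation of the reasoning behind that guess, with assumed numbers (population mean 100, SD 15, and a made-up self-selection rule that makes smarter people more likely to take the test):

```python
import math
import random

random.seed(0)

POP_MEAN, POP_SD = 100, 15  # assumed population norms

# Assumed self-selection: the probability of taking a free online IQ test
# rises with ability (logistic curve centered at 110; made-up parameters).
population = [random.gauss(POP_MEAN, POP_SD) for _ in range(100_000)]
test_takers = [iq for iq in population
               if random.random() < 1 / (1 + math.exp(-(iq - 110) / 5))]

sample_mean = sum(test_takers) / len(test_takers)
sample_var = sum((x - sample_mean) ** 2 for x in test_takers) / len(test_takers)
sample_sd = math.sqrt(sample_var)

def reported_score(true_iq):
    """Score normed against the self-selected sample as if it were representative."""
    return POP_MEAN + POP_SD * (true_iq - sample_mean) / sample_sd

for true_iq in (100, 115, 130):
    print(true_iq, "->", round(reported_score(true_iq), 1))
# For scores in this range, the reported values come out below the true ones,
# i.e. norming against a self-selected sample underestimates the test-taker.
```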
Luckily it will remain possible for everyone to do so for the foreseeable future.
Thanks for this. Although I don't suffer from depression, the comments about meta-suffering really resonate with me. I think (this is unverified as of yet) that my life can be improved by getting rid of meta-suffering.
I certainly wouldn't pay that cent if there were an option of preventing 50 years of torture using that cent. There's nothing to say that my utility function can't take values in the surreals.
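For instance, a minimal sketch of how a surreal-valued assignment could encode that preference (the specific values here are just illustrative, not from the original discussion):

```latex
% Illustrative surreal-valued utilities:
U(\text{lose one cent}) = -1, \qquad U(\text{50 years of torture}) = -\omega .
% Then for every finite n,
-n > -\omega ,
% so preventing the torture is preferred over avoiding any finite number of
% one-cent costs.
```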
I'll make sure to keep you away from my body if I ever enter a coma...
Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?
So what did you guess then?
Or maybe that's what I want you to think I'd say...
Hey everyone, I just voted, and so I can see the correct answer. The average is 19.2, so you should choose 17%!
Perhaps I am just contrarian in nature, but I took issue with several parts of her reasoning.
"What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter?"
The real question is: why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show r...
No, you can only get an answer up to the limit imposed by the fact that the coastline is actually composed of atoms. The fact that a coastline looks like a fractal is misleading. It makes us forget that, just like everything else, it's fundamentally discrete.
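A toy version of that point, using the standard Richardson scaling L(eps) ≈ F·eps^(1−D) with assumed constants, and cutting the ruler off at roughly atomic scale:

```python
# Toy illustration: measured coastline length L(eps) = F * eps**(1 - D) for
# ruler size eps. D and F are made-up values (D ~ 1.25 is often quoted for
# Britain's west coast); only the trend matters.

D = 1.25                  # assumed fractal dimension
F = 2.0e3                 # assumed prefactor

ATOMIC_SCALE_KM = 1e-13   # ~1 angstrom; below this, "coastline" stops being meaningful

for ruler_km in (100, 10, 1, 1e-3, 1e-6, ATOMIC_SCALE_KM):
    length = F * ruler_km ** (1 - D)
    print(f"ruler = {ruler_km:g} km  ->  measured length ~ {length:.3g} km")

# The measured length keeps growing as the ruler shrinks, but the power law
# cannot be extrapolated below the atomic cutoff, so the answer at that scale
# is enormous yet finite rather than divergent.
```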
This has always bugged me as a case of especially sloppy extrapolation.
The island of knowledge is composed of atoms? The shoreline of wonder is not a fractal?
You're right, if the opponent is a TDT agent. I was assuming that the opponent was simply a prediction => mixed-strategy mapper. (In fact, I always thought that the strategy "51% one-box, 49% two-box" would game the system, assuming that Omega just predicts whichever outcome is most likely.)
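A quick check of that parenthetical claim, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and an Omega that fills the opaque box iff the more probable action is one-boxing:

```python
# Expected payoff of a mixed strategy (one-box with probability p) against an
# Omega that predicts whichever action is more probable and fills the opaque
# box iff it predicts one-boxing. Standard Newcomb payoffs assumed.

BIG, SMALL = 1_000_000, 1_000

def expected_payoff(p_one_box):
    opaque = BIG if p_one_box > 0.5 else 0      # Omega predicts the likelier action
    return p_one_box * opaque + (1 - p_one_box) * (opaque + SMALL)

print(expected_payoff(1.00))   # pure one-boxing:        1,000,000
print(expected_payoff(0.51))   # 51/49 mixed strategy:   1,000,490
print(expected_payoff(0.49))   # tips Omega's prediction:      510
```

Against this simple most-likely-outcome predictor, the 51/49 mix picks up an extra $490 in expectation.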
If the opponent is a TDT agent, then it becomes more complex, as in the OP. Just as above, you have to take the argmax over all possible y->x mappings, instead of simply taking the argmax over all outputs.
Putting it in that perspective, essentially in this case we ...
Well, it certainly will defect against any mixed strategy that is hard-coded into the opponent's source code. On the other hand, if the mixed strategy the opponent plays depends on what it predicts the TDT agent will play, then the TDT agent will figure out which outcome has the higher expected utility:
(I defect, Opponent runs "defection predicted" mixed strategy)
(I cooperate, Opponent runs "cooperation predicted" mixed strategy)
Of course, this is still simplifying things a bit, since it assumes that the opponent can perfectly predic...
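A sketch of that comparison with assumed numbers: standard prisoner's-dilemma payoffs for the TDT agent and made-up mixed strategies for the opponent:

```python
# The two expected utilities being compared above. Payoffs are the usual
# prisoner's-dilemma values for the TDT agent; the opponent's mixed strategies
# (probability it cooperates, given which action it predicts) are made up.

PAYOFF = {            # (my move, opponent's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

opp_coop_prob = {     # assumed opponent mixed strategy given its prediction
    "defection predicted":   0.1,
    "cooperation predicted": 0.8,
}

def eu(my_move, prediction):
    p = opp_coop_prob[prediction]
    return p * PAYOFF[(my_move, "C")] + (1 - p) * PAYOFF[(my_move, "D")]

print("EU(defect):   ", eu("D", "defection predicted"))     # 0.1*5 + 0.9*1 = 1.4
print("EU(cooperate):", eu("C", "cooperation predicted"))   # 0.8*3 + 0.2*0 = 2.4
```

With these particular numbers, cooperating comes out ahead; with a less responsive opponent the comparison would flip.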
Okay, I completely understand that the Heisenberg Uncertainty principle is simply the manifestation of the fact that observations are fundamentally interactions.
However, I never thought of the uncertainty principle as the part of quantum mechanics that causes some interpretations to treat observers as special. I was always under the impression that it was quantum entanglement... I'm trying to imagine how a purely wave-function based interpretation of quantum entanglement would behave... what is the "interaction" that localizes the spin wavefunction, and why does it seem to act across distances faster than light? Please, someone help me out here.
Er, this is assuming that the information revealed is not intentionally misleading, correct? Because certainly you could give a TDT agent an extra option which would be rational to take on the basis of the information available to the agent, but which would still be rigged to be worse than all other options.
Or in other words, the TDT agent can never be aware of such a situation.
Isn't this an invalid comparison? If The Nation were writing for an audience of readers who only read The Nation, wouldn't it change what it prints? The point is that these publications are fundamentally part of a discussion.
Imagine if I thought there were fewer insects on earth than you did, and we had a discussion. If you compare the naive person who reads only my lines vs. the naive person who reads only your lines, your person ends up better off, because on the whole there are indeed a very large number of insects on earth. This will be the case regardl...
Here: http://lesswrong.com/lw/ua/the_level_above_mine/
I was going to go through quote by quote, but I realized I would be quoting the entire thing.
Basically:
A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes's level. (approaching alluded to in several instances)
B) You were surprised to discover you were not the smartest person Marcello knew. (or if you consider surprised too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with "Darn").
C)...
To me the part that stands out the most is the computation of P() by the AI.
From this description, it seems that P is described as essentially omniscient. It knows the locations and velocities of every particle in the universe, and it has unlimited computational power. Regardless of whether possessing and computing with such information is possible, the AI will model P as being literally omni...