All of dranorter's Comments + Replies

I'm interested in what happens if individual agents A, B, C merely have a probability of cooperating given that their threshold is satisfied. So, consider the following assumptions.

The last assumption is simply that the failure probability ε is low enough. Given these assumptions, we have ⊢ □_w E via the same proof as in the post.

So for example if the three thresholds are all greater than two thirds, there can be some nonzero ε such that the agents will cooperate with probab... (read more)
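A minimal sketch of one way such assumptions could be written, using the □_p notation from the reply below ("assigns probability at least p") and the failure parameter ε; the exact formulas were elided above, so the threshold names and the grouping here are illustrative:

```latex
% E abbreviates "everyone cooperates", i.e. E = A \wedge B \wedge C.
% \square_p \phi reads: "probability at least p is assigned to \phi".
% Illustrative assumptions: each agent *probably* cooperates once its
% threshold is satisfied, rather than cooperating outright.
\vdash \square_{1-\epsilon}\left(\square_a E \to A\right), \qquad
\vdash \square_{1-\epsilon}\left(\square_b E \to B\right), \qquad
\vdash \square_{1-\epsilon}\left(\square_c E \to C\right)
% With a, b, c > 2/3 and \epsilon small enough, a Payor-style argument
% would then yield \vdash \square_w E for some w depending on a, b, c, \epsilon.
```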

abramdemski
The interesting thing about this -- beyond showing that going probabilistic allows the handshake to work with somewhat unreliable bots -- is that proving ⊢ □_w E rather than ⊢ E is a lot different. With ⊢ E, we're like "And so Peano arithmetic (or whatever) proves they cooperate! We think Peano arithmetic is accurate about such matters, so, they actually cooperate." With the conclusion ⊢ □_w E we're more like "So if the agent's probability estimates are any good, we should also expect them to cooperate", or something like that. The connection to them actually cooperating is looser.
dranorter

It doesn't seem quite right to say that the sensor readings are identical when the thief has full knowledge of the diamond. The sensor readings after tampering can be identical, but some earlier sensor readings caused the predictor to believe that the thief would tamper with the sensors. The problem is just that the predictor knows what signs to look for, and humans do not.

dranorter

It's worth noting that in the case of logical induction, there's a more fleshed-out story where the LI eventually has self-trust and can also come to believe probabilities produced by other LI processes. Logical induction can come to trust the outputs of other kinds of processes too. For LI, a "virtuous process" is basically one that satisfies the LI criterion, though of course it wouldn't switch to a new set of beliefs unless they were known products of a longer amount of thought, or had proven themselves superior in some other way.
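For reference, a rough informal statement of the LI criterion being invoked (paraphrasing Garrabrant et al., "Logical Induction", 2016; the formalization below is a compressed gloss, not the paper's exact statement):

```latex
% A sequence of belief states \mathbb{P}_1, \mathbb{P}_2, \ldots is a logical
% inductor iff no efficiently computable (e.c.) trader T exploits it, where
% "exploits" means T's cumulative value against the market prices is
% unbounded above while remaining bounded below:
\{\mathbb{P}_n\}\ \text{satisfies the LI criterion} \iff
\neg\,\exists\, T\ \text{(e.c. trader)}:\;
\sup_n \mathrm{value}_T(n) = \infty \ \wedge\ \inf_n \mathrm{value}_T(n) > -\infty
```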

abramdemski
I don't think this is true. Two different logical inductors need not trust each other in general, even if one has had vastly longer to think, and so has developed "better" beliefs. They do have reason to trust each other eventually on empirical matters, IE, matters for which they get sufficient feedback. (I'm unfortunately relying on an unpublished theorem to assert that.) However, for undecidable sentences, I think there is no reason why one logical inductor should consider another to have "virtuous reasoning", even if the other has thought for much longer.

What we can say is that a logical inductor eventually sees itself as reasoning virtuously. And, furthermore, "itself" there means itself as mathematically defined -- it does not similarly trust "whatever the computer I'm running on happens to believe tomorrow", since the computational process could be corrupted by e.g. a cosmic ray.

But for both a human and a logical inductor, the epistemic process involves an interaction with the environment. Humans engage in discussion, read literature, observe nature. Logical inductors get information from the deductive process, which they trust to be a source of truth. What distinguishes corrupt environmental influences from non-corrupt ones?

It's easy to list flaws. For example, the first paragraph admits a major flaw; and technically, if trust itself is a big part of what you value, then it could be crucially important to learn to "trust and think at the same time".

Are either of those the flaw he found?

What we have to go on are "fairly inexcusable" and "affects one of the conclusions". I'm not sure how to filter the claims into a set of more than one conclusion, since they circle around an idea which is supposed to be hard to put into words. Here's ... (read more)

I think there’s some looseness in the Mind Illuminated ontology around this point, but I would say: thinking involves attention on an abstract concept. When attention and/or awareness are on a thought, that’s metacognitive attention and/or awareness. For example, if I’m trying to work on an intellectual task but start thinking about food, my attention has moved from the task to food. Specifically my attention might be on a specific possibility for dinner, or on a set of possibilities. If I have no metacognitive awareness, then I’m lost in the thought; my attention is not on the thought, it’s on the food.

dranorter

The definition may not be principled, but there's something that feels a little bit right about it in context. There are various ways to "stay in the logical past" which seem similar in spirit to migueltorrescosta's remark, like calculating your opponent's exact behavior but refusing to look at certain aspects of it. The proposal, it seems, is to iterate already-iterated games by passing more limited information of some sort between the possibly-infinite sessions. (Both your and the opponent's memory gets limited.) But if we admit that Miguel's "iterated p

... (read more)
abramdemski
I have been thinking a bit about evolutionarily stable equilibria, now. Two things seem interesting (perhaps only as analogies, not literal applications of the evolutionarily stable equilibria concept):

* The motivation for evolutionary equilibria involves dumb selection, rather than rational reasoning. This cuts the tricky knots of recursion. It also makes the myopic learning, which only pays attention to how well things perform in one round, seem more reasonable. Perhaps there's something to be said about rational learning algorithms needing to cut the knots of recursion somehow, such that the evolutionary equilibrium concept holds a lesson for more reflective agents.
* The idea of evolutionary stability is interesting because it mixes the game and the metagame together a little bit: the players should do what is good for them, but the resulting solution should also be self-enforcing, which means consideration is given to how the solution shapes the future dynamics of learning. This seems like a necessary feature of a solution.
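As an aside on the concept itself, here is a minimal sketch of Maynard Smith's stability check for pure strategies in a symmetric game; the Hawk-Dove payoffs are the textbook illustration, not anything from this thread:

```python
import numpy as np

def is_ess(payoff: np.ndarray, s: int) -> bool:
    """Check whether pure strategy s is evolutionarily stable.

    payoff[i, j] = payoff to a player using strategy i against strategy j.
    s is an ESS iff, for every mutant t != s, either
      u(s, s) > u(t, s), or
      u(s, s) == u(t, s) and u(s, t) > u(t, t).
    """
    for t in range(payoff.shape[0]):
        if t == s:
            continue
        if payoff[s, s] > payoff[t, s]:
            continue  # strictly best reply to itself; mutant t cannot invade
        if payoff[s, s] == payoff[t, s] and payoff[s, t] > payoff[t, t]:
            continue  # ties against itself, but beats the mutant head-to-head
        return False
    return True

# Hawk-Dove with V=2, C=4: neither pure strategy is stable
# (the evolutionarily stable point is the mixed equilibrium).
hawk_dove = np.array([[(2.0 - 4.0) / 2.0, 2.0],
                      [0.0, 1.0]])
print(is_ess(hawk_dove, 0), is_ess(hawk_dove, 1))  # False False
```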
dranorter

I think it's worth mentioning that part of the original appeal of the term (which made us initially wary) was the way it matches intuitively with the experience of signaling behavior. Here's the original motivating example. Imagine that you are in the Parfit's Hitchhiker scenario and Paul Ekman has already noticed that you're lying. What do you do? You try to get a second chance. But it won't be enough to simply re-state that you'll pay him. Even if he doesn't detect the lie this time around, you're the same person who had to lie only a moment ago. What ch

... (read more)
dranorter

What does it look like to rotate and then renormalize?

There seem to be two answers. The first answer is that the highest-probability event is the one farthest to the right. This event must be the entire space Ω (the sure event). All we do to renormalize is scale until this event has probability 1.

If we rotate until some probabilities are negative, and then renormalize in this way, the negative probabilities stay negative, but rescale.

The second way to renormalize is to choose a separating line, and use its normal vector as probability. This keeps probability positive. Then we fin... (read more)
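A minimal numeric sketch of both schemes, assuming events are encoded as 2D vectors (probability, probability × utility) as in the rotation picture; the encoding convention and the numbers are illustrative:

```python
import numpy as np

def rotate(events: np.ndarray, theta: float) -> np.ndarray:
    """Rotate every event-vector counterclockwise by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return events @ np.array([[c, s], [-s, c]])  # row-vector convention

def renorm_sure_event(events: np.ndarray, sure: int) -> np.ndarray:
    """First scheme: rescale so the sure event (farthest right, the whole
    space) gets probability 1. Probabilities made negative by the rotation
    stay negative; they are merely rescaled."""
    return events / events[sure, 0]

def renorm_separating_line(events: np.ndarray, normal: np.ndarray, sure: int) -> np.ndarray:
    """Second scheme: choose a separating line through the origin and read
    probabilities off as dot products with its unit normal; if all events
    lie on one side of the line, every probability stays positive."""
    n = normal / np.linalg.norm(normal)
    t = np.array([-n[1], n[0]])  # tangent to the line: the new 'utility' axis
    projected = np.stack([events @ n, events @ t], axis=1)
    return projected / projected[sure, 0]

events = np.array([[1.0, 0.3],    # the sure event
                   [0.6, 0.4],    # some event
                   [0.4, -0.1]])  # its complement
rotated = rotate(events, np.pi / 8)
print(renorm_sure_event(rotated, sure=0))
print(renorm_separating_line(rotated, normal=np.array([1.0, 0.2]), sure=0))
```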

Your assessment makes the assumption that the knowledge that we are missing is "not that important".

Better to call it a rational estimate than an assumption.

It is perfectly rational to say to oneself "but if I refuse to look into anything which takes a lot of effort to get any evidence for, then I will probably miss out." We can put math to that sentiment and use it to help decide how much time to spend investigating unlikely claims (a toy calculation is sketched below). Solutions along these lines are sometimes called "taking the outside view".

To my eyes yo

... (read more)
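A toy version of putting math to that sentiment, with made-up numbers: given a prior probability p that a long-shot claim pans out, a value v if it does, and a cost c of investigating, look into it when the expected payoff exceeds the cost.

```python
def worth_investigating(p: float, v: float, c: float) -> bool:
    """Investigate an unlikely claim when expected value beats the cost."""
    return p * v > c

# A 1% chance at something worth 500 hours justifies a 2-hour look,
# but not a 10-hour one.
print(worth_investigating(0.01, 500, 2))   # True
print(worth_investigating(0.01, 500, 10))  # False
```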
Erfeyah
That is actually very clear :) Thanks. As I was saying to ProofOfLogic, this post is about the identification of the difficult space, on which I think we are all in agreement. The way you explain it, I see why you would suggest that choosing at random is the best rational strategy. I would prefer to explore associated topics in a different post so we keep this one self-contained (and because I have to think about it!). Thanks for engaging!