All of RocksBasil's Comments + Replies

I was thinking of iatrogenic transmissions, yeah (and prions have been a long-term psychological fear of mine, too... so I've perhaps crawled through too much publicly available information about prions for a normal person)

I wonder if there are any instances of FFI transmitted through the iatrogenic pathway, whether it can be distinguished from typical CJD, and whether iatrogenic prions could become a significant issue for healthcare (more instances of prion diseases due to an aging population could possibly mean more contaminated medical equipment,... (read more)

How much do we know about the presence of prion diseases in other animals we frequently consume? 

A quick search shows that even fish have some variant of the prion protein, so perhaps all vertebrates pose a theoretical risk of carrying a prion disease, although the species barrier will likely be too high for prions of non-mammal origin.

I'm quite concerned about pigs.

Apparently pigs are considered to be prion-resistant, as no naturally occurring prion diseases among pigs have been identified, but it is possible to infect them with s... (read more)

"Infectious" means "transmissible between people". As the name suggests, fatal familial insomnia is a genetic condition. (FFI and the others listed are also prion diseases - the prion just emerges on its own without a source prion and no part of the disease is contagious. This is an interesting trait of prions that could not happen with, say, a disease caused by a virus.)

Can someone catch FFI from coming into contact with the neural tissues of a patient with FFI? 

I suspect it's possible that FFI genes cause the patient's body to create prions, but can... (read more)

2eukaryote
Possibly if by "come in contact" we mean like ingesting or injecting or something. That's the going theory for how the Kuru epidemic started - consumption of the brain of a person with sporadic (randomly-naturally-occurring) CJD. Fortunately cannibalism isn't too common so this isn't a usual means of transmission. I think if anything less intensive (say, skin or saliva contact) made CJD transmissible, we would know by now. See also brain contact with contaminated materials, e.g. iatrogenic CJD, or Alzheimer's, which I mention briefly in this piece. Yep! That's how it works. Real brutal.

I like your last point a lot. Does it mean that governments/institutions are more interested in protecting the systems they are part of than their constituents? That indeed seems possible and would explain this situation.

I still wonder whether the same thing happens on an individual level as well, which could help shed some light.

2Dagon
I think there's a bunch of subtlety in the causation, but yes, most political units pursue self-preservation more fervently than their nominal goals. Another filter would be that those who control and benefit from the organization are acting in their own interests, even when the non-powerful "members" are harmed.

My assumption is that promises are "vague": playing $99 or $100 both fulfil the promise of making a high claim close to $100, so there is no incentive to break it.

I think the vagueness stops the race to the bottom in TD, compared to the dollar auction, in which every bid can be outmatched by a tiny increment without immediately risking going overboard.

I do think I overcomplicated the matter to avoid modifying the payoff matrix.

"breaking a promise" or "keeping a promise" has no intrinsic utilities here.

What I state is that under this formulation, if the other player believes your promise and plays the best response to your promise, your best response is to keep the promise.

2Dagon
What utility do you get from keeping the promise, and how does it outweigh an extra $1 from bidding $99 (and getting $101) instead of $100? If you're invoking Hofstadter's super-rationality (the idea that your keeping a promise is causally linked to the other person keeping theirs), fine. If you're acknowledging that you get outside-game utility from being a promise-keeper, also fine (but you've got a different payout structure than written). Otherwise, why are you giving up the $1? And if you are willing to go $99 to get another $1 payout, why isn't the other player (kind of an inverse super-rationality argument)?

" in this case, "trust" is equivalent to changing the payout structure to include points for self-image and social cohesion "

I guess I'm just trying to model trust in TD without changing the payoff matrix. The payoff matrix of the "vague" TD works in promoting trust: a player has no incentive to break a promise.

2Dagon
You're just avoiding acknowledging the change in payoff matrix, not avoiding the change itself. If "breaking a promise" has a cost or "keeping a promise" has a benefit (even if it's only a brief good feeling), that's part of the utility calculation, and is part of the actual payoff matrix used for decision-making.

This is true. The issue is that the Nash Equilibrium formulation of TD predicts that everyone will bid $2, which is counter-intuitive and is not confirmed by empirical findings.

I'm trying to convince myself that the NE formulation in TD is not entirely rational.

If Alice claims close to $100 (say, $80), Bob gets a higher payoff by claiming $100 (getting $78) than by claiming $2 (getting $4).
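In code, a minimal sketch of that arithmetic, assuming the standard TD payoff rule (both players receive the lower claim, with a $2 bonus for the lower claimant and a $2 penalty for the higher one); the function name is just illustrative:

```python
def td_payoffs(a, b, bonus=2):
    """Traveler's Dilemma payoffs for claims (a, b), assuming the standard
    rule: both players receive the lower claim, the lower claimant gets a
    bonus and the higher claimant pays the same amount as a penalty."""
    if a == b:
        return a, b
    low = min(a, b)
    return (low + bonus, low - bonus) if a < b else (low - bonus, low + bonus)

print(td_payoffs(80, 100))  # (82, 78):  Bob claims $100 against $80, gets $78
print(td_payoffs(80, 2))    # (0, 4):    Bob claims $2 against $80, gets only $4
print(td_payoffs(100, 99))  # (97, 101): undercutting a $100 claim by $1 yields $101
```

The last line is the temptation Dagon points to below: against a believed claim of $100, a $99 bid earns $101.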

1Gurkenglas
Ohh, I thought it was $2 per dollar of difference between them. Okay.

I would assume Kelvin users to outnumber Fahrenheit users on LW.

1TheWakalix
I'd assume the opposite, since I don't think physicists (and other thermodynamic scientists like some chemists) make up a majority of LW readers, but it's irrelevant. I can (and did) put both forms side-by-side to allow both physicists and non-physicists to better understand the magnitude of the temperature difference. (And since laymen are more likely to skim over the number and ignore the letter, it's disproportionately more important to include Fahrenheit.) Edit: wait, delta-K is equivalent to delta-C. In that case, since physicists ⋃ metric-users might make up the majority of LW readers, you're probably right about the number of users.

I think we should still keep b even with the iterations, since I made the assumption that "degrees of loyalty" is a property of S, not entirely the outcome of rational game-playing.

(I still assume S is rational apart from having b in his payoffs.)

Otherwise those kinds of tests probably make little sense.

I also wonder what happens if M doesn't know the repulsiveness of the test for certain, but only a distribution over it (i.e. the CIA only knows that on average killing your spouse is pretty repulsive, except this lady here really hates her husband, oops)... (read more)

Thanks, I had forgotten about the proof before replying to your comment.

You are correct that in PD, (D,C) is Pareto, and so the Nash Equilibrium (D,D) is much closer to a Pareto outcome than the Nash Equilibrium (0,0) of TD is to its Pareto outcomes (somewhere around each person getting a million pounds, give or take a cent).

It is still strange to see a game with only one round and no collusion land pretty close to the optimum, while its repeated version (the dollar auction) seems to deviate badly from the Pareto outcome.

4Stuart_Armstrong
It is a bit strange. It seems this is because in the dollar auction, you can always make your position slightly better unilaterally, in a way that will make it worse once the other player reacts. Iterate enough, and all value is destroyed. But in a one-round game, you can't slide down that path, so you pick by looking at the overall picture.

Thanks. The final result is somewhat surprising; perhaps it's a quirk of my construction.

Setting r to be higher than v does remove the "undercover agents" that have practically zero obedience, but I didn't know it was the optimal choice for M.

2Bucky
I wonder what would happen if one were to remove b and play the game iteratively. The game stops after 50 iterations or the first time S fails the test or defects. b is then essentially replaced by S’s expected payoff over the remaining iterations if he remains loyal. However M would know this value so the game might need further modification.

I think "everybody launches all nukes" might not be a Nash Equilibrium.

We can argue that once one side has launched their nukes, the other side does not necessarily have an incentive to retaliate: they won't really care whether the enemy gets nuked once they themselves are nuked, and they may even have an incentive not to launch, to prevent the "everybody dies" outcome, which is arguably worse even for someone who is about to die.
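A toy sketch of that argument, with made-up ordinal payoffs (the numbers are arbitrary; they only encode the preferences described above: retaliating after being nuked gains you nothing, and you mildly prefer the rest of humanity to survive):

```python
def is_nash(pay_a, pay_b, a, b):
    """Check whether the pure-strategy profile (a, b) of a 2x2 game is a
    Nash equilibrium: neither player gains by deviating unilaterally."""
    return (all(pay_a[a][b] >= pay_a[alt][b] for alt in (0, 1)) and
            all(pay_b[a][b] >= pay_b[a][alt] for alt in (0, 1)))

HOLD, LAUNCH = 0, 1
# Rows: player A's action; columns: player B's action. Payoffs are made up.
pay_a = [[3, -2],   # A holds:   peace / A nuked, rest of humanity survives
         [2, -3]]   # A launches: first strike / everybody dies
pay_b = [[3, 2],
         [-2, -3]]
print(is_nash(pay_a, pay_b, LAUNCH, LAUNCH))  # False: each side prefers to hold fire
print(is_nash(pay_a, pay_b, HOLD, HOLD))      # True under these assumed payoffs
```

Under these (assumed) preferences, mutual launch is not an equilibrium, which is the point above.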

1Jay Molstad
It seems to me that both parties to the Cold War favored the defect-defect outcome (launch all the nukes) over the cooperate-defect outcome (we die, they don't). It's hard to tell, though, because both sides had an incentive to signal that preference regardless of the truth. But that's an extreme case. Any war you choose will have each side choosing between continuing to fight and surrendering. The cooperate-cooperate outcome (making peace in a way that approximates the likely outcome of a war) is probably best for all, but it's hard to achieve in practice. And it seems to me that at least part of the problem is that, if one side chooses to cooperate (sue for peace and refrain from maximally fighting), they run the risk that the other side will continue to defect (fight) and seize an advantage.

I haven't found any information yet, but I suspect there is a mixed Nash somewhere in TD.

[This comment is no longer endorsed by its author]
2Stuart_Armstrong
There is no mixed Nash equilibrium in the TD example above (see the proof above).

It is interesting that experimental results for the traveller's dilemma seem to deviate strongly from the Nash Equilibrium, and in fact land quite close to the Pareto-optimal solution.

This is pretty strange for a game that has only one round and no collusion (you'd expect it to end up like the Prisoner's Dilemma, no?)

It is rather different from what we see in the dollar auction, which has no Nash Equilibrium and always deviates far from the Pareto-optimal solution.

I suspect that this game being one-round-only actually improv... (read more)

6Stuart_Armstrong
I think a key difference is that in PD, (Defect, Cooperate) is a Pareto outcome (you can't make it better for the cooperator without making it worse for the defector). While (0, 0) is far from the Pareto boundary. So people can clearly see that naming numbers around 0 is a massive loss, so they focus on avoiding that loss rather than optimising their game vs the other player.

I think there are economic factors at play, although it is more subtle than a plain comparison of "alleged GDP per capita".

I recall that both China and the Middle East went through a process of "de-industrialisation" from the European High Middle Ages to the Early Modern period. Essentially, both China and the Middle East started substituting simple human labour for machines, causing cranes, water mills, etc. to become rarer over time.

And strangely enough a study showed that when this was happening there was little difference... (read more)

This is an interesting study. It seems that his numbers are not too far off what I plugged in as a placeholder (that our current energy consumption is within a couple of orders of magnitude of becoming climate-altering).

Though I haven't made sense of the nanobots yet, haha

Ah, thanks. So the equilibrium is more robust than I initially assumed; I didn't expect that.

So the issue won't be as pressing as climate change could be, although some kind of ceiling for energy consumption on Earth still exists...

Oh yes! This makes more sense now.

#humans has decreasing marginal returns, since the main concern for humanity is the ability to recover, and while that ability increases with #humans, it does not do so linearly.

I do think individuals have *some* concern about whether humanity in general will survive: since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for an individual (I'm not sure where I'm going here...)

1TheWakalix
I agree that #humans has decreasing marginal returns at these scales - I meant linear in the asymptotic sense. (This is important because large numbers of possible future humans depend on humanity surviving today; if the world was going to end in a year then (a) would be better than (b). In other words, the point of recovering is to have lots of utility in the future.) I don't think most people care about their genes surviving into the far future. (If your reasoning is evolutionary, then read this if you haven't already.) I agree that many people care about the far future, though.

Ah, I never thought about this being a secretary problem.

Well, initially I used it as an analogy for evolution and didn't think too much about memorising/backtracking.

Oh wait, if the mountaineer has a memory of each peak he saw, then he should go back to one of the high peaks he encountered before (assuming the flood hasn't mopped the floor yet, which is a given since he is still exploring). There are probably no irrecoverable rejections here, unlike in the secretary problem.

The second choice is a strange one. I think the entire group taking the best chanc... (read more)
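A toy sketch of the backtracking point above (the function name and heights are hypothetical): once the whole path has been explored, the mountaineer simply returns to the best peak remembered, so no rejection is final the way it is in the secretary problem.

```python
def climb_with_memory(peak_heights):
    """Explore the whole path, remember every peak seen, then backtrack to
    the best one (assuming the flood hasn't arrived while exploring).
    `peak_heights` is the sequence of peak heights encountered in order."""
    best = max(range(len(peak_heights)), key=peak_heights.__getitem__)
    return best, peak_heights[best]

print(climb_with_memory([310, 455, 280, 602, 590]))  # (3, 602): go back to peak 3
```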

1TheWakalix
Epistemic status: elaborating on a topic by using math on it; making the implicit explicit. From a collective standpoint, the utility function over #humans looks like this: it starts at 0 when there are 0 humans, slowly rises until it reaches "recolonization potential", then rapidly shoots up, eventually slowing down but still linear. However, from an individual standpoint, the utility function is just 0 for death, 1 for life. Because of the shape of the collective utility function, you want to "disentangle" deaths, but the individual doesn't have the same incentive.
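A hedged sketch of that shape (the threshold and constants are arbitrary placeholders, not estimates of anything):

```python
import math

def collective_utility(n, threshold=10_000):
    """Illustrative shape only: near 0 for tiny populations, a slow rise up
    to a 'recolonization potential' threshold, a rapid jump around it, and
    roughly linear growth far above it. All constants are made up."""
    ramp = 1 / (1 + math.exp(-(n - threshold) / (threshold / 10)))  # soft step
    return 0.01 * min(n, threshold) + ramp * n

for n in (0, 1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9}: {collective_utility(n):,.0f}")
```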
4Dagon
The key is that "humanity" doesn't make decisions. Individuals do. The vast majority of individuals care more about themselves than about strangers, or about the statistical future masses. Public debate is mostly about signaling, so it will be split between (a) and (b), depending on cultural/political affiliation. Actual behavior is generally selfish, so most will choose (a), maximizing their personal chances.

Yes, those two pieces can change the situation dramatically (I have tried writing another parable including them, but found it a bit difficult).

I'm pondering what the best strategy with communication is. Initially I thought I could spread them out so that each mountaineer knows the location/height of the other mountaineers within a given radius (significantly larger than the visibility in the fog) and adds that information into their "move towards the greatest height" algorithm. Which might work, but I cannot rigorously show how useful that... (read more)
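A toy sketch of that rule (1-D terrain, arbitrary constants; this just makes the "move towards the greatest height" idea concrete, not a claim that it is optimal or matches the parable's setup):

```python
import math
import random

def terrain(x):
    """A bumpy toy mountain range (arbitrary function, for illustration)."""
    return 50 * math.sin(x / 7) + 10 * math.sin(x / 2)

def step(positions, radius):
    """Each mountaineer sees the position/height of everyone within `radius`
    (wider than the fog) and takes a unit step toward the highest of them,
    wandering randomly if already at the best known spot."""
    moved = []
    for x in positions:
        target = max((p for p in positions if abs(p - x) <= radius), key=terrain)
        if target == x:
            moved.append(x + random.choice([-1, 1]))
        else:
            moved.append(x + (1 if target > x else -1))
    return moved

group = [random.uniform(0, 100) for _ in range(20)]
for _ in range(100):
    group = step(group, radius=15)
print(max(round(terrain(x), 1) for x in group))  # best height reached
```

The obvious failure mode is the whole group clustering on one local peak, which is why I can't show how useful the shared information is in general.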

2Pattern
The environment may change over time, but 1) mountains change slowly, and 2) that's what brains are for. Even if "evolution doesn't pick up on it", how much will the height of a mountain (and which mountain is the tallest) naturally change over the course of your lifetime?