All of Bunthut's Comments + Replies

The audible links don't work for me, and probably not for many others outside America.

>To show how weird English is: English is the only proto indo european language that doesn't think the moon is female ("la luna") and spoons are male (“der Löffel”).

In most of those languages, gender can be derived from the form of the word, with exceptions of course. In German it really makes neither semantic nor phonetic sense - second-language speakers often don't learn it at all, but here the chaos shows no sign of weakening: it is rather the strong verbs that are currently being lost, bu... (read more)

>Sentence lengths have declined.

Data: I looked for similar data on sentence lengths in German, and the first result I found covering a similar timeframe was Wikipedia referencing Kurt Möslein: Einige Entwicklungstendenzen in der Syntax der wissenschaftlich-technischen Literatur seit dem Ende des 18. Jahrhunderts (1974), which does not find the same trend:

| Year | wps (words per sentence) |
|------|--------------------------|
| 1770 | 24.50 |
| 1800 | 25.54 |
| 1850 | 32.00 |
| 1900 | 23.58 |
| 1920 | 22.72 |
| 1940 | 19.60 |
| 1960 | 19.90 |

This data on scientific writing starts lower than any of your English examples from that time, and increases initially, but arrives... (read more)

6 picolightcones as well, don't think that changed.

Before logging in I had 200 LW-Bux and 3 virtues. Now I have 50 LW-Bux and 8 virtues, and I didn't do anything. What's that? Is there any explanation of how this stuff works?

9habryka
Huh, I currently have you in our database as having zero LW-Bux or virtues. We did some kind of hacky things to enable tracking state both while logged in and logged out, so there is a non-trivial chance I messed something up, though I did try all the basic things. Looking into it right now.

I think your disagreement can be made clear with more formalism. First, the point for your opponents:

When the animals are in a cold place, they are selected for a long fur coat, and also for IGF (and other things as well). To some extent, these are just different ways of describing the same process. Now, if they move to a warmer place, they are selected for shorter fur instead, and they are still selected for IGF. And there's also a more concrete correspondence to this: they have also been selected for "IF cold long fur, ELSE short fur" the entire t... (read more)
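
A toy simulation of that last point (my example, with made-up fitness numbers: matching the climate is assumed to give a 10% reproductive edge). It shows that a lineage carrying the conditional policy has been favored over the whole history, in both climates:

```python
STRATEGIES = ["always_long", "always_short", "conditional"]  # hypothetical labels

def fur(strategy, cold):
    if strategy == "always_long":
        return "long"
    if strategy == "always_short":
        return "short"
    return "long" if cold else "short"  # the conditional policy

def fitness(strategy, cold):
    # Assumption: fur matching the climate gives a 10% reproductive edge.
    return 1.1 if fur(strategy, cold) == ("long" if cold else "short") else 1.0

def population_shares(generations=200, switch_at=100):
    shares = {s: 1.0 / len(STRATEGIES) for s in STRATEGIES}
    for g in range(generations):
        cold = g < switch_at  # cold climate first, then the animals move somewhere warm
        shares = {s: x * fitness(s, cold) for s, x in shares.items()}
        total = sum(shares.values())
        shares = {s: x / total for s, x in shares.items()}
    return shares

print(population_shares())  # the conditional strategy dominates over the full history
```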

for AIs, more robust adversarial examples - especially ones that work on AIs trained on different datasets - do seem to look more "reasonable" to humans.

Then I would expect they are also more objectively similar. In any case that finding is strong evidence against manipulative adversarial examples for humans - your argument is basically "there's just this huge mess of neurons, surely somewhere in there is a way", but if the same adversarial examples work on minds with very different architectures, then that's clearly not why they exist. Instead, they have ... (read more)

2the gears to ascension
I suppose that is what I said interpreted as a deductive claim. I have more abductive/bayesian/hunch information than that, I've expressed some of it, but I've been realizing lately a lot of my intuitions are not via deductive reasoning, which can make them hard to verify or communicate. (and I'd guess that that's a common problem, seems like the sort of thing science exists to solve.) I'm likely not well equipped to present justifiedly-convincing-to-highly-skeptical-careful-evaluator claims about this, just detailed sketches of hunches and how I got them. Your points about the limits of hypnosis seem reasonable. I agree that the foothold would only occur if the receiver is being "paid-in-dopamine"-or-something hard enough to want to become more obedient. We do seem to me to see that presented in the story - the kid being concerningly fascinated by the glitchers right off the bat as soon as they're presented. And for what it's worth, I think this is an exaggerated version of a thing we actually see on social media sometimes, though I'm kind of bored of this topic and would rather not expand on that deeply.

Ok, that's mostly what I've heard before. I'm skeptical because:

  1. If something like classical adversarial examples existed for humans, it likely wouldn't have the same effect on different people, or even on the same person viewing it from a different angle, or maybe even in a different mood.
  2. There are no known adversarial examples of the kind you describe for humans. We could tell if we had found them, because we have metrics of "looking similar" which are not based on our intuitive sense of similarity, like pixelwise differences and convolutions (see the sketch after this comment). All examples of "easily confused" images
... (read more)
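
A minimal sketch of the kind of intuition-free similarity metric point 2 refers to: plain pixelwise distance plus a crude local-average (convolution-style) comparison. The image sizes, perturbation, and function names are placeholders I made up, not tied to any particular model or dataset:

```python
import numpy as np

def pixelwise_distance(img_a, img_b):
    """Mean absolute per-pixel difference, on a 0-255 scale."""
    return float(np.mean(np.abs(img_a.astype(float) - img_b.astype(float))))

def blurred_distance(img_a, img_b, k=3):
    """Compare local k-by-k averages, a crude convolution-style similarity check."""
    def blur(img):
        img = img.astype(float)
        out = np.zeros_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = img[max(0, i - k // 2):i + k // 2 + 1,
                                max(0, j - k // 2):j + k // 2 + 1].mean()
        return out
    return float(np.mean(np.abs(blur(img_a) - blur(img_b))))

# A classic small adversarial perturbation would keep both numbers tiny
# while flipping a classifier's output.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
perturbed = np.clip(img + rng.integers(-2, 3, size=(32, 32)), 0, 255)
print(pixelwise_distance(img, perturbed), blurred_distance(img, perturbed))
```
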
3the gears to ascension
for AIs, more robust adversarial examples - especially ones that work on AIs trained on different datasets - do seem to look more "reasonable" to humans. The really obvious adversarial example of this kind in human is like, cults, or so - I don't really have another, though I do have examples that are like, on the edge of the cult pattern. It's not completely magic, it doesn't work on everyone, and it does seem like a core component of why people fall to it is something like a relaxed "control plane" that doesn't really try hard to avoid being crashed by it; combined with, it's attacking through somewhat native behaviors. But I think OP's story is a good presentation of this anyway, because the level of immunity you can reliably have to a really well optimized thing is likely going to be enough to maintain some sanity, but not enough to be zero affected by it. like, ultimately, light causes neural spikes. neural spikes can do all sorts of stuff. the robust paths through the brain are probably not qualitatively unfamiliar but can be hit pretty dang hard if you're good at it. and the behavior being described isn't "do anything of choosing" - it seems to just be "crash your brain and go on to crash as many others as possible", gene drive style. It doesn't seem obvious that the humans in the story are doomed as a species, even - but it's evolutionarily novel to encounter such a large jump in your adversary's ability to find the vulnerabilities that currently crash you. Hmm, perhaps the attackers would have been more effective if they were able to make, ehm, reproductively fit glitchers... Oh, something notable here - if you're not personally familiar with hypnosis, it might be harder to grok this. Hypnosis is totally a thing, my concise summary is it's "meditation towards obedience" - meditation where you intentionally put yourself in "fast path from hearing to action", ish. edit 3: never do hypnosis with someone you don't seriously trust, ie someone you've known for

I just thought through the causal graphs involved, there's probably enough bandwidth through vision into reliably redundant behavior to do this

Elaborate.

edit: putting the thing I was originally going to say back:

I meant that I think there's enough bandwidth available from vision into the configuration of matter in the brain that a sufficiently powerful mind could adversarial-example the human brain hard enough to implement the adversarial process in the brain, get it to persist in that brain, take control, and spread. We see weaker versions of this in advertising and memetics already, and it seems to be getting worse with social media - there are a few different strains, which generally aren't hig... (read more)

This isn't my area of expertise, but I think I have a sketch for a very simple weak proof:

The conjecture states that V's runtime and π's length are polynomial in the size of C, but leaves the constant open. Therefore a counterexample would have to be an infinite family of circuits satisfying P(C), with their corresponding π growing faster than polynomially. To prove the existence of such a counterexample, you would need a proof that each member of the family satisfies P(C). But that proof has finite length, and can be used as the π fo... (read more)
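
A compressed formalization of the sketch above (my notation, writing π for the advice string as in the conjecture and Π for the hypothesized finite proof):

```latex
\text{Counterexample: a family } (C_i)_{i\in\mathbb{N}} \text{ with } P(C_i)\ \forall i, \text{ but }
\min\{\,|\pi| : V(C_i,\pi)\text{ accepts}\,\} \text{ superpolynomial in } |C_i|.
\qquad
\text{If } \Pi \vdash \forall i\; P(C_i), \text{ set } \pi_i := (\Pi, i); \text{ then }
|\pi_i| = O(|\Pi| + \log i), \text{ polynomial in } |C_i|.
```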

3Eric Neyman
That's an interesting point! I think it only applies to constructive proofs, though: you could imagine disproving the counterexample by showing that for every V, there is some circuit that satisfies P(C) but that V doesn't flag, without exhibiting a particular such circuit.

I think the solution to this is to add something to your wealth to account for inalienable human capital, and count costs only by how much you will actually be forced to pay. This is a good idea in general; otherwise most people with student loans or a mortgage are "in the red" and couldn't use this at all.
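
A minimal sketch of the adjustment being proposed. All the numbers and names below are illustrative assumptions, not from the post: size the Kelly bet against liquid wealth plus a rough present value of inalienable future earnings, then cap the stake at what you can actually be forced to pay now:

```python
def kelly_fraction(p, b):
    """Kelly fraction for win probability p and net odds b (win b per 1 staked)."""
    return p - (1 - p) / b

# Illustrative numbers (assumptions, not from the post):
liquid_wealth = 20_000    # cash and investments
student_loans = 60_000    # debt, but repayment is limited by future income
human_capital = 300_000   # rough present value of inalienable future earnings

naive_bankroll = liquid_wealth - student_loans                   # negative: forbids any bet
adjusted_bankroll = liquid_wealth + human_capital - student_loans  # far from "in the red"

f = kelly_fraction(p=0.6, b=1.0)     # a 60/40 even-odds bet
bet = f * adjusted_bankroll
bet = min(bet, liquid_wealth)        # never stake more than you can be forced to pay now
print(f, bet)
```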

2kave
This doesn't play very well with fractional Kelly, though.
2CronoDAS
Human capital is worth nothing after you die, though.

What are real numbers then? On the standard account, real numbers are equivalence classes of Cauchy sequences of rationals, the finite diagonals being one such sequence. I mean, "Real numbers don't exist" is one way to avoid the diagonal argument, but I don't think that's what cubefox is going for.
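
For concreteness, the standard Cauchy-sequence construction being referenced, spelled out:

```latex
\mathbb{R} \;:=\; \{\text{Cauchy sequences } (q_n) \text{ of rationals}\} \,/\, \sim,
\qquad (q_n) \sim (r_n) \;\iff\; \lim_{n\to\infty} |q_n - r_n| = 0 .
```

In the diagonal argument, the n-th finite diagonal (a finite decimal, hence rational) gives one such Cauchy sequence, and the diagonal real is its equivalence class.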

3Said Achmiz
“Real numbers don’t exist” seems like a good solution to me.

The society’s stance towards crime - preventing it via the threat of punishment - is not what would work on smarter people

This is one of two claims here that I'm not convinced by. Informal disproof: If you are a smart individual in today's society, you shouldn't ignore threats of punishment, because it is in the state's interest to follow through anyway, pour encourager les autres. If crime prevention is in people's interest, intelligence monotonicity implies that a smart population should be able to make punishment work at least this well. Now I don't trust in... (read more)

3Mikhail Samin
If today's society consisted mostly of smart individuals, they would overthrow the government that does something unfair instead of giving in to its threats. Only if you're a kid who's playing with other human kids (which is the scenario described in the quoted text), and converging on fairness possibly includes getting some idea of how much effort various things take different people. If you're an actual grown-up (not that we have those) and you're playing with aliens, you probably don't update, and you certainly don't update in the direction of anything asymmetric.

Maybe I'm missing something, but it seems to me that all of this is straightforwardly justified through simple selfish pareto-improvements.

Take a look at Critch's cake-splitting example in section 3.5. Now imagine varying the utility of splitting. How high does it need to get before [red->Alice;green->Bob] is no longer a pareto improvement over [(split)] from both players' selfish perspectives before the observation? It's 27, and that's also exactly where the decision flips when weighing Alice 0.9 and Bob 0.1 in red, and Alice 0.1 and Bob 0.9 in green.... (read more)
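
A sketch of the arithmetic. The payoffs here are assumptions chosen to reproduce the 27 threshold (whole cake worth 30 to whoever gets it, each player 0.9 confident in their favored color); check them against Critch's actual numbers in section 3.5:

```python
# Assumed payoffs, chosen to reproduce the 27 threshold; verify against Critch's section 3.5.
WHOLE_CAKE = 30.0
P_ALICE_RED = 0.9    # Alice's credence that the coin comes up red
P_BOB_GREEN = 0.9    # Bob's credence that it comes up green

def bet_is_pareto_improvement(u_split):
    """Is [red -> Alice gets the cake; green -> Bob gets it] selfishly at least as good as
    splitting, for both players, before the coin is observed?"""
    alice_bet = P_ALICE_RED * WHOLE_CAKE   # 27
    bob_bet = P_BOB_GREEN * WHOLE_CAKE     # 27
    return alice_bet >= u_split and bob_bet >= u_split

def weighted_choice_in_red(u_split):
    """Post-observation (red) decision under Pareto weights 0.9 for Alice, 0.1 for Bob."""
    give_to_alice = 0.9 * WHOLE_CAKE + 0.1 * 0.0   # 27
    split = 0.9 * u_split + 0.1 * u_split          # u_split
    return "give to Alice" if give_to_alice >= split else "split"

for u in (26.0, 27.0, 28.0):
    print(u, bet_is_pareto_improvement(u), weighted_choice_in_red(u))
# Both criteria flip at u = 27, matching the claim in the comment.
```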

3abramdemski
I think you are right, I was confused. A pareto-improvement bet is necessarily one which all parties would selfishly consent to (at least not actively object to). 

The timescale for improvement is dreadfully long and the day-to-day changes are imperceptible.

 

This sounded wrong, but I guess it is technically true? I had great in-session improvements as I warmed up the area and got into it, and the difference between a session where I missed the previous day and one where I didn't is absolutely perceptible. Now, after that initial boost, it's true that I couldn't tell if the "high point" was improving day to day, but that was never a concern - the above was enough to give me confidence. Plus, with your external rotations, was there not perceptible strength improvement week to week?

So I've reread your section on this, and I think I follow it, but it's arguing a different claim. In the post, you argue that a trader that correctly identifies a fixed point, but doesn't have enough weight to get it played, might not profit from this knowledge. That I agree with.

But now you're saying that even if you do play the new fixed point, that trader still won't gain?

I'm not really calling this a proof because it's so basic that something else must have gone wrong, but:

 has a fixed point at , and  doesn't. Then... (read more)

On reflection, I didn't quite understand this exploration business, but I think I can save a lot of it.

>You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in.

I think the key here is in the word "confirm". It's true that unless you believe p is a fixed point, you can't just try out p and see the result. However, you can change your beliefs about p based on your results from ex... (read more)

4abramdemski
This is the fundamental obstacle according to me,  so, unfortunate that I haven't successfully communicated this yet.  Perhaps I could suggest that you try to prove your intuition here? 

I don't think the learnability issues are really a problem. I mean, if doing a handstand with a burning 100 riyal bill between your toes under the full moon is an exception to all physical laws and actually creates utopia immediately, I'll never find out either. Assuming you agree that that's not a problem, why is the scenario you illustrate? In both cases, it's not like you can't find out, you just don't, because you stick to what you believe is the optimal action.

I don't think this would be a significant problem in practice any more than other kinds of h... (read more)

2abramdemski
You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in. However, I agree that the situation is analogous to the handstand example, assuming it's true that you'd never try the handstand. My sense is that the difficulties I describe here are "just the way it is" and only count against FixDT in the sense that we'd be happier with FixDT if somehow these difficulties weren't present.  I think your idea for how to find repulsive fixed-points could work if there's a trader who can guess the location of the repulsive point exactly rather than approximately, and has the wealth to precisely enforce that belief on the market. However, the wealth of that trader will act like a martingale; there's no reliable profit to be made (even on average) by enforcing this fixed point. Therefore, such a trader will go broke eventually. On the other hand, attractive fixed points allow profit to be made (on average) by approximately guessing their locations. Repulsive points effectively "drain willpower".
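
A minimal numerical illustration of why the two kinds of fixed point behave so differently (my example maps, not abramdemski's): a small error shrinks near an attractive fixed point and blows up near a repulsive one, so only the former can be found by approximate guessing:

```python
def iterate(f, x0, steps=20):
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

attractive = lambda x: 0.5 * x + 0.25   # fixed point at 0.5, slope 0.5 (attractive)
repulsive  = lambda x: 2.0 * x - 0.5    # fixed point at 0.5, slope 2.0 (repulsive)

print(iterate(attractive, 0.51))  # converges back toward 0.5
print(iterate(repulsive, 0.51))   # the same 0.01 error is amplified by 2^20
```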

That prediction may be true. My argument is that "I know this by introspection" (or, introspection-and-generalization-to-others) is insufficient. For a concrete example, consider your 5-year-old self. I remember some pretty definite beliefs I had about my future self that turned out wrong, and if I ask myself how aligned I am with him, I don't even know how to answer; he just seems way too confused and incoherent.

I think it's also not absurd that you do have perfect caring in the sense relevant to the argument. This does not require that you don't make mista... (read more)


This prediction seems flatly wrong: I wouldn’t bring about an outcome like that. Why do I believe that? Because I have reasonably high-fidelity access to my own policy, via imagining myself in the relevant situations.

It seems like you're conflating two things here, because the thing you would want is not knowable by introspection. What I think you're introspecting is that if you'd noticed that the-thing-you-pursued-so-far was different from what your brother actually wants, you'd do what he actually wants. But the-thing-you-pursued-so-far doesn't play the... (read more)

2TurnTrout
I want to know whether, as a matter of falsifiable fact, I would enact good outcomes by my brother's values were I very powerful and smart. You seem to be sympathetic to the falsifiable-in-principle prediction that, no, I would not. (Is that true?) Anyways, I don't really buy this counterargument, but we can consider the following variant (from footnote 2):  "True" values: My own (which I have access to) "Proxy" values: My brother's model of my values (I have a model of his model of my values, as part of the package deal by which I have a model of him) I still predict that he would bring about a good future by my values. Unless you think my predictive model is wrong? I could ask him to introspect on this scenario and get evidence about what he would do? 

The idea is that we can break any decision problem down by cases (like "insofar as the predictor is accurate, ..." and "insofar as the predictor is inaccurate, ...") and that all the competing decision theories (CDT, EDT, LDT) agree about how to aggregate cases.

Doesn't this also require that all the decision theories agree that the conditioning fact is independent of your decision?

Otherwise you could break down the normal prisoner's dilemma into "insofar as the opponent makes the same move as me" and "insofar as the opponent makes the opposite move" and con... (read more)
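
To make the worry concrete, here is that case split written out with the usual illustrative prisoner's dilemma payoffs (5/3/1/0) and a made-up case probability p_same, none of which come from the original comment:

```python
# Payoffs to "me" in a standard one-shot prisoner's dilemma (illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def value_by_cases(my_move, p_same):
    """Expected payoff if 'opponent plays the same move as me' is treated as a fixed-probability case."""
    same = PAYOFF[(my_move, my_move)]
    diff = PAYOFF[(my_move, "D" if my_move == "C" else "C")]
    return p_same * same + (1 - p_same) * diff

# Insofar as the opponent matches me, C beats D (3 vs 1);
# insofar as they differ, D beats C (5 vs 0).
# Treating p_same as independent of my move:
print(value_by_cases("C", 0.5), value_by_cases("D", 0.5))
# But against a twin or an accurate predictor, p_same itself depends on my choice,
# which is exactly where CDT and EDT/LDT part ways - so the case split isn't neutral.
```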

Would a decision theory like this count as "giving up on probabilities" in the sense in which you mean it here?

I think your assessments of what's psychologically realistic are off.

I do not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than "doesn't look like an Oliver" is attached to something in your head.

I think before writing that, Yud imagined calling [unambiguously gendered friend] either pronoun, and asked himself if it felt wrong, and found that it didn't. This seems realistic to me: I've experienced my emotional introspection becoming blank on topics I've put a lot of thinking into. This... (read more)

I don't think the analogy to biological brains is quite as strong. For example, biological brains need to be "robust" not only to variations in the input, but also in a literal sense, to forceful impact or to parasites trying to control it. It intentionally has very bad suppressability, and this means there needs to be a lot of redundancy, which makes "just stick an electrode in that area" work. More generally, it is under many constraints that a ML system isn't, probably too many for us to think of, and it generally prioritizes safety over performance. Bo... (read more)

2Quintin Pope
Firstly, thank you for your comment. I'm always glad to have good faith engagement on this topic. However, I think you're assuming the worst case scenario in regards to interpretability. Artificial networks are often trained with randomness applied to their internal states (dropout, gradient noise, etc). These seem like they'd cause more internal disruption (and are FAR more common) than occasional impacts. Evolved resistance to parasite control seems like it should decrease interpretability, if anything. E.g., having multiple reward centers that are easily activated is a terrible idea for resisting parasite control. And yet, the brain does it anyways. No it doesn't. One of my own examples of brain interpretability was: Which is a type of suppressability. Various forms of amnesia, aphasia and other perceptual blocks are other examples of suppressability in one form or another. So more reliable ML systems will be more interpretable? Seems like a reason for optimism. Yes, it would be terrible. I think you've identified a major reason why larger neural nets learn more quickly than smaller nets. The entire point of training neural nets is that you constantly change each part of the net to better process the data. The internal representations of different circuits have to be flexible, robust and mutually interpretable to other circuits. Otherwise, the circuits won't be able to co-develop quickly. One part of Knowledge Neurons in Pretrained Transformers I didn't include (but probably should have) is the fact that transformers re-use their input embedding in their internal knowledge representations. I.e., the feed forward layers that push the network to output target tokens represent their target token using the input embeddings of the target tokens in question. This would be very surprising if you thought that the network was just trying to represent its internal states using the shortest possible encoding. However, it's very much what you'd expect if you thought

Probably way too old here, but I had multiple experiences relevant to the thread.

Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered if I was dreaming now, and everything looked so real and vivid that I concluded I was not.

I can create a kind of half-dream, where I see random images and moving sequences at most 3 seconds or so long, in succession. I am really drowsy but not sleeping, and I am aware in the back of my head that they are only schematic and vague.

I would say the backstories in dreams are d... (read more)

I think it's still possible to have a scenario like this. Let's say each trader would buy or sell a certain amount when the price is below/above what they think it is worth, with the transition being very steep instead of instant. Then you could still have long price intervals where the amounts bought and sold remain constant, and then every point in there could be the market price.

I'm not sure if this is significant. I see no reason to set the traders up this way other than the result in the particular scenario that kicked this off, and adding traders who don'... (read more)
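
A small numerical sketch of the scenario (the demand functions and numbers are my own toys): two traders with steep but continuous buy/sell transitions around different prices leave a whole interval where net demand is exactly zero, so any price in that interval clears the market:

```python
def demand(price, belief, size=1.0, steepness=200.0):
    """Units bought (positive) or sold (negative); a steep but continuous transition at `belief`."""
    x = steepness * (belief - price)
    return max(min(x, size), -size)   # saturate at +/- size

def net_demand(price):
    # Trader A thinks the sentence is worth 0.3, trader B thinks 0.7.
    return demand(price, 0.3) + demand(price, 0.7)

for p in (0.25, 0.35, 0.5, 0.65, 0.75):
    print(p, net_demand(p))
# For every price between roughly 0.305 and 0.695, A sells at full size and B buys at
# full size, so net demand is 0 and the "market price" is underdetermined.
```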

So I'm not sure what's going on with my mental sim. Maybe I just have a super-broad 'crypto-moral detector' that goes off way more often than yours (w/o explicitly labeling things as crypto-moral for me).

Maybe. How were your intuitions before you encountered LW? If you already had a hypocrisy intuition, then trying to internalize the rationalist perspective might have led it to ignore the morality-distinction.

My father playing golf with me today, telling me to lean down more to stop them going out left so much.

2abramdemski
Ok. My mental sim doesn't expect any backlash in this type of situation. My first thought is it's just super obvious why the advice might apply to you and not to him; but, this doesn't really seem correct. For one thing, it might not be super obvious. For another, I think there are cases where it's pretty obvious, but I nonetheless anticipate a backlash. So I'm not sure what's going on with my mental sim. Maybe I just have a super-broad 'crypto-moral detector' that goes off way more often than yours (w/o explicitly labeling things as crypto-moral for me).

I don't strongly relate to any of these descriptions. I can say that I don't feel like I have to pretend advice from equals is more helpful than it is, which I suppose means it's not face. The most common way to reject advice is a comment like "eh, whatever" and ignoring it. Some nerds get really mad at this and seem to demand intellectual debate. This is not well received. Most people give advice with the expectation of intellectual debate only on crypto-moral topics (this is also not well received generally, but the speaker seems to accept that as an "identity cost"), or not at all.

You mean advice to diet, or "technical" advice once it's established that the person wants to diet? I don't have experience with either, but the first is definitely crypto-moral.

2abramdemski
What's definitely not crypto-moral?

This excludes worlds which the deductive process has ruled out, so for example if A∨B has been proved, all worlds will have either A or B. So if you had a bet which would pay $10 on A, and a bet which would pay $2 on B, you're treated as if you have $2 to spend.

I agree you can arbitrage inconsistencies this way, but it seems very questionable. For one, it means the market maker needs to interpret the output of the deductive process semantically. And it makes him go bankrupt if that logic is inconsistent. And there could be a case where a proposit... (read more)
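
A minimal sketch of the bookkeeping being discussed (my own encoding: worlds are truth assignments to A and B, and the sentence strings and function names are made up): the market maker credits a trader with the minimum payout over all worlds the deductive process hasn't ruled out:

```python
from itertools import product

def min_payout(holdings, proved):
    """holdings: {sentence: payout per world where it holds}; proved: sentences known true.
    Worlds are truth assignments to (A, B); worlds violating a proved sentence are excluded."""
    def holds(sentence, world):
        a, b = world
        return {"A": a, "B": b, "A or B": a or b}[sentence]
    worlds = [w for w in product([True, False], repeat=2)
              if all(holds(s, w) for s in proved)]
    return min(sum(v for s, v in holdings.items() if holds(s, w)) for w in worlds)

holdings = {"A": 10.0, "B": 2.0}
print(min_payout(holdings, proved=[]))           # 0.0: the world where both are false remains
print(min_payout(holdings, proved=["A or B"]))   # 2.0: every remaining world pays at least $2
```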

Why is the price of the un-actualized bet constant? My argument in the OP was to suppose that PCH is the dominant hypothesis, so, mostly controls market prices.

Thinking about this in detail, it seems like what influence traders have on the market price depends on a lot more of their inner workings than just their beliefs. I was thinking in a way where each trader only had one price for the bet, below which they bought and above which they sold, no matter how many units they traded (this might contradict "continuous trading strategies" because of finite wea... (read more)

2abramdemski
The continuity property is really important.

But now, you seem to be complaining that a method that explicitly avoids Troll Bridge would be too restrictive?

No, I think finding such a no-learning-needed method would be great. It just means your learning-based approach wouldn't be needed.

You seem to be arguing that being susceptible to Troll Bridge should be judged as a necessary/positive trait of a decision theory.

No. I'm saying if our "good" reasoning can't tell us where in Troll Bridge the mistake is, then something that learns to make "good" inferences would have to fall for it.

But there are decisi

... (read more)

So I don't see how we can be sure that PCH loses out overall. LCH has to exploit PCH -- but if LCH tries it, then we're seemingly in a situation where LCH has to sell for PCH's prices, in which case it suffers the loss I described in the OP.

So I've reread the logical induction paper for this, and I'm not sure I understand exploitation. Under 3.5, it says:

On each day, the reasoner receives 50¢ from T, but after day t, the reasoner must pay $1 every day thereafter.

So this sounds like before day t, T buys a share every day, and those shares never pay out - ot... (read more)

2abramdemski
Again, my view may have drifted a bit from the LI paper, but the way I think about this is that the market maker looks at the minimum amount of money a trader has "in any world" (in the sense described in my other comment). This excludes worlds which the deductive process has ruled out, so for example if A∨B has been proved, all worlds will have either A or B. So if you had a bet which would pay $10 on A, and a bet which would pay $2 on B, you're treated as if you have $2 to spend. It's like a bookie allowing a gambler to make a bet without putting down the money because the bookie knows the gambler is "good for it" (the gambler will definitely be able to pay later, based on the bets the gambler already has, combined with the logical information we now know). Of course, because logical bets don't necessarily ever pay out, the market maker realistically shouldn't expect that traders are necessarily "good for it". But doing so allows traders to arbitrage logically contradictory beliefs, so, it's nice for our purposes. (You could say this is a difference between an ideal prediction market and a mere betting market; a prediction market should allow arbitrage of inconsistency in this way.)
2abramdemski
Hm. It's a bit complicated and there are several possible ways to set things up. Reading that paragraph, I'm not sure about this sentence either. In the version I was trying to explain, where traders are "forced to sell" every morning before the day of trading begins, the reasoner would receive 50¢ from the trader every day, but would return that money next morning. Also, in the version I was describing, the reasoner is forced to set the price to $1 rather than 50¢ as soon as the deductive process proves 1+1=2. So, that morning, the reasoner has to return $1 rather than 50¢. That's where the reasoner loses money to the trader. After that, the price is $1 forever, so the trader would just be paying $1 every day and getting that $1 back the next morning. I would then define exploitation as "the trader's total wealth (across different times) has no upper bound". (It doesn't necessarily escape to infinity -- it might oscillate up and down, but with higher and higher peaks.) Now, the LI paper uses a different definition of exploitation, which involves how much money a trader has within a world (which basically means we imagine the deductive process decides all the sentences, and we ask how much money the trader would have; and, we consider all the different ways the deductive process could do this). This is not equivalent to my definition of exploitation in general; according to the LI paper, a trader 'exploits' the market even if its wealth is unbounded only in some very specific world (eg, where a specific sequence of in-fact-undecidable sentences gets proved). However, I do have an unpublished proof that the two definitions of exploitation are equivalent for the logical induction algorithm and for a larger class of "reasonable" logical inductors. This is a non-trivial result, but, justifies using my definition of exploitation (which I personally find a lot more intuitive). My basic intuition for the result is: if you don't know the future, the only way to ensure y
2abramdemski
Now I feel like you're trying to have it both ways; earlier you raised the concern that a proposal which doesn't overtly respect logic could nonetheless learn a sort of logic internally, which could then be susceptible to Troll Bridge. I took this as a call for an explicit method of avoiding Troll Bridge, rather than merely making it possible with the right prior. But now, you seem to be complaining that a method that explicitly avoids Troll Bridge would be too restrictive? I think there is a mistake somewhere in the chain of inference from cross→−10 to low expected value for crossing. Material implication is being conflated with counterfactual implication. A strong candidate from my perspective is the inference from ¬(A∧B) to C(A|B)=0 where C represents probabilistic/counterfactual conditional (whatever we are using to generate expectations for actions). You seem to be arguing that being susceptible to Troll Bridge should be judged as a necessary/positive trait of a decision theory. But there are decision theories which don't have this property, such as regular CDT, or TDT (depending on the logical-causality graph). Are you saying that those are all necessarily wrong, due to this? I'm not sure quite what you meant by this. For example, I could have a lot of prior mass on "crossing gives me +10, not crossing gives me 0". Then my +10 hypothesis would only be confirmed by experience. I could reason using counterfactuals, so that the troll bridge argument doesn't come in and ruin things. So, there is definitely a way. And being born with this prior doesn't seem like some kind of misunderstanding/delusion about the world. So it also seems natural to try and design agents which reliably learn this, if they have repeated experience with Troll Bridge.

So your social experience is different in this respect?

I've never experienced this example in particular, but I would not expect such a backlash. Can you think of another scenario with non-moral advice that I have likely experienced?

2abramdemski
Can you tell me anything about the "advice culture" you have experience with? For example, I've had some experience with Iranian culture, and it is very different from American culture. It's much more combative (in the sense of combat vs nurture, not necessarily real combativeness -- although I think they have a higher preference/tolerance for heated arguments as well). I was told several times that the bad thing about american culture is that if someone has a problem with you they won't tell you to your face, instead they'll still try to be nice. I sometimes found the blunt advice (criticism) from Iranians overwhelming and emotionally difficult to handle.
2abramdemski
Diet advice?

It seems to me that this habit is universal in American culture, and I'd be surprised (and intrigued!) to hear about any culture where it isn't.

I live in Austria. I would say we do have norms against hypocrisy, but your example with the driver's license seems absurd to me. I would be surprised (and intrigued!) if agreement with this one in particular is actually universal in American culture. In my experience, hypocrisy norms are for moral and crypto-moral topics.

For normies, morality is an imposition. Telling them of new moral requirements increases how mu... (read more)

2abramdemski
My current take is that anti-hypocrisy norms naturally emerge from micro status battles: giving advice naturally has a little undercurrent of "I'm smarter than you", and pointing out that the person is not following their own advice counters this. Therefore, a hypocrisy check naturally becomes a common response, because it's a pretty good move in status games. Therefore, people expect a hypocrisy check, and check themselves. On the one hand, I was probably blind to the moral aspect and over-generalized to some extent. On the other hand, do you really imagine me telling someone they should get a driver's license (in a context where there is common knowledge that I don't have one), and not expect a mild backlash? I expect phrases like "look who's talking" and I expect the 'energy in the room' after the backlash to be as if my point was refuted. I expect to have to reiterate the point, to show that I'm undeterred, if I still want it to be considered seriously in the conversation. (Particularly if the group isn't rationalists.) So your social experience is different in this respect?

The payoff for 2-boxing is dependent on beliefs after 1-boxing because all share prices update every market day and the "payout" for a share is essentially what you can sell it for.

If a sentence is undecidable, then you could have two traders who disagree on its value indefinitely: one would have a highest price to buy that's below the other's lowest price to sell. But then anything between those two prices could be the "market price", in the classical supply and demand sense. If you say that the "payout" of a share is what you can sell it for... well, the ... (read more)

2abramdemski
This sounds like doing optimality results poorly. Unfortunately, there is a lot of that (EG how the different optimality notions for CDT and EDT don't help decide between them). In particular, the "don't be a stupid frequentist" move has blinded Bayesians (although frequentists have also been blinded in a different way). Solomonoff induction has a relatively good optimality notion (that it doesn't do too much worse than any computable prediction). AIXI has a relatively poor one (you only guarantee that you take the subjectively best action according to Solomonoff induction; but this is hardly any guarantee at all in terms of reward gained, which is supposed to be the objective). (There are variants of AIXI which have other optimality guarantees, but none very compelling afaik.) An example of a less trivial optimality notion is the infrabayes idea, where if the world fits within the constraints of one of your partial hypotheses, then you will eventually learn to do at least as well (reward-wise) as that hypothesis implies you can do.
2abramdemski
Hmm. Well, I didn't really try to prove that 'physical causation' would persist as a hypothesis. I just tried to show that it wouldn't, and failed. If you're right, that'd be great! But here is what I am thinking: Firstly, yes, there is a market maker. You can think of the market maker as setting the price exactly where buys and sells balance; both sides stand to win the same amount if they're correct, because that amount is just the combined amount they've spent. Causality is a little funky because of fixed point stuff, but rather than imagining the traders hold shares for a long time, we can instead imagine that today's shares "pay out" overnight (at the next day's prices), and then traders have to re-invest if they still want to hold a position. (But this is fine, because they got paid the next day's prices, so they can afford to buy the same number of shares as they had.) But if the two traders don't reinvest, then tomorrow's prices (and therefore their profits) are up to the whims of the rest of the market. So I don't see how we can be sure that PCH loses out overall. LCH has to exploit PCH -- but if LCH tries it, then we're seemingly in a situation where LCH has to sell for PCH's prices, in which case it suffers the loss I described in the OP. Thanks for raising the question, though! It would be very interesting if PCH actually could not maintain its position. I have been thinking a bit more about this. I think it should roughly work like this: you have a 'conditional contract', which is like normal conditional bets, except normally a conditional bet (a|b) is made up of a conjunction bet (a&b) and a hedge on the negation of the condition (not-b); the 'conditional contract' instead gives the trader an inseparable pair of contracts (the a&x bet bound together with the not-b bet). Normally, the price of anything that's proved goes to one quickly (and zero for anything refuted), because traders are getting $1 per share (and $0 per share for what's been re
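
For reference, one standard way to make the conjunction-plus-hedge construction abramdemski mentions work out numerically (the textbook "called-off bet", written in my notation with p the conditional price):

```latex
\text{Take } p=\tfrac{P(a\wedge b)}{P(b)}. \text{ Buy 1 share of } a\wedge b \text{ and } p \text{ shares of } \neg b:
\quad \text{cost} = P(a\wedge b) + p\,P(\neg b) = p.
\qquad \text{Payoffs: } \neg b \Rightarrow p \ (\text{net } 0, \text{ the refund});\quad
b\wedge a \Rightarrow 1 \ (\text{net } 1-p);\quad b\wedge\neg a \Rightarrow 0 \ (\text{net } -p).
```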

Because we have a “basic counterfactual” proposition for what would happen if we 1-box and what would happen if we 2-box, and both of those propositions stick around, LCH’s bets about what happens in either case both matter. This is unlike conditional bets, where if we 1-box, then bets conditional on 2-boxing disappear, refunded, as if they were never made in the first place.

I don't understand this part. Your explanation of PCDT at least didn't prepare me for it; it doesn't mention betting. And why is the payoff for the counterfactual-2-boxing determined b... (read more)

2abramdemski
Not sure how to best answer. I'm thinking of all this in an LIDT setting, so all learning occurs through traders making bets. The payoff for 2-boxing is dependent on beliefs after 1-boxing because all share prices update every market day and the "payout" for a share is essentially what you can sell it for. Similarly, if a trader buys a share of an undecidable sentence (let's say, the consistency of PA) then the only "payoff" is whatever you can sell it for later, based on future market prices, because the sentence will never get fully decided one way or the other. My claim is: eventually, if you observe enough cases of "crossing" in similar circumstances, your expectation for "cross" should be consistent with the empirical history (rather than, say, -10 even though you've never experienced -10 for crossing). To give a different example, I'm claiming it is irrational to persist in thinking 1-boxing gets you less money in expectation, if your empirical history continues to show that it is better on average. And I claim that if there is a persistent disagreement between counterfactuals and evidential conditionals, then the agent will in fact experimentally try crossing infinitely often, due to the value-of-information of testing the disagreement (that is, this will be the limiting behavior of reduced temporal discounting, under the assumption that the agent isn't worried about traps). So the two will indeed converge (under those assumptions). The hope is that we can block the troll argument completely if proving B->A does not imply cf(A|B)=1, because no matter what predicate the troll uses, the inference from P to cf fails. So what we concretely need to do is give a version of counterfactual reasoning which lets cf(A|B) not equal 1 in some cases where B->A is proved. Granted, there could be some other problematic argument. However, if my learning-theoretic ideas go through, this provides another safeguard: Troll Bridge is a case where the agent never learns the em

are the two players physically precisely the same (including environment), at least insofar as the players can tell?

In the examples I gave, yes. Because that's the case where we have a guarantee of equal policy, from which people try to generalize. If we say players can see their number, then the twins in the prisoner's dilemma needn't play the same way either.

But this is one reason why correlated equilibria are, usually, a better abstraction than Nash equilibria.

The "signals" players receive for correlated equilibria are already semantic. So I'm suspicious t... (read more)

2abramdemski
It's not something we would naively expect, but it does further speak in favor of CE, yes? In particular, if you look at those learnability results, it turns out that the "external signal" which the agents are using to correlate their actions is the play history itself. IE, they are only using information which must be available to learning agents (granted, sufficiently forgetful learning agents might forget the history; however, I do not think the learnability results actually rely on any detailed memory of the history -- the result still holds with very simple agents who only remember a few parameters, with no explicit episodic memory (unlike, eg, tit-for-tat)).
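
For readers who want a concrete picture of the "signal" in a correlated equilibrium, here is the textbook chicken example (the payoffs and the uniform signal over three joint actions are standard illustrative choices, not taken from this thread):

```python
# Chicken payoffs (row, col): D = dare, S = swerve. Standard illustrative values.
PAYOFF = {("D", "D"): (0, 0), ("D", "S"): (7, 2), ("S", "D"): (2, 7), ("S", "S"): (6, 6)}

# A public signal recommends one of three joint actions with equal probability.
SIGNAL_DIST = {("D", "S"): 1/3, ("S", "D"): 1/3, ("S", "S"): 1/3}

def row_gain_from_deviating(recommended_row):
    """Expected gain for the row player from ignoring their recommendation,
    conditioning only on what their own signal tells them."""
    conditional = {joint: p for joint, p in SIGNAL_DIST.items() if joint[0] == recommended_row}
    total = sum(conditional.values())
    obey = sum(p * PAYOFF[(recommended_row, c)][0] for (_, c), p in conditional.items()) / total
    other = "S" if recommended_row == "D" else "D"
    deviate = sum(p * PAYOFF[(other, c)][0] for (_, c), p in conditional.items()) / total
    return deviate - obey

print(row_gain_from_deviating("D"), row_gain_from_deviating("S"))  # both <= 0: no incentive to deviate
```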

Hum, then I'm not sure I understand in what way classical game theory is neater here?

Changing the labels doesn't make a difference classically.

As long as the probabilistic coin flips are independent on both sides

Yes.

Do you have examples of problems with copies that I could look at and that you think would be useful to study?

No, I think you should take the problems of distributed computing and translate them into decision problems that you then have a solution to.

Well, if I understand the post correctly, you're saying that these two problems are fundamentally the same problem

No. I think:

...the reasoning presented is correct in both cases, and the lesson here is for our expectations of rationality...

As outlined in the last paragraph of the post. I want to convince people that TDT-like decision theories won't give a "neat" game theory, by giving an example where they're even less neat than classical game theory.

Actually it could. 

I think you're thinking about a realistic case (same algorithm, similar environment... (read more)

2adamShimi
Hum, then I'm not sure I understand in what way classical game theory is neater here? As long as the probabilistic coin flips are independent on both sides (you also mention the case where they're symmetric, but let's put that aside for the example), then you can apply the basic probabilistic algorithm for leader election: both copies flip a coin n times to get an n-bit number, which they exchange. If the numbers are different, then the copy with the smallest one says 0 and the other says 1; otherwise they flip a coin and return the answer. With this algorithm, you have probability ≥ 1 − 1/2^n of deciding different values, and so you can get as close as you want to 1 (by paying the price in more random bits). Do you have examples of problems with copies that I could look at and that you think would be useful to study?
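
A sketch of that leader-election protocol, simulated from a bird's-eye view (the function and variable names are mine; a real implementation would run one copy per process and exchange the numbers over a channel):

```python
import random

def leader_election(n):
    """Both copies draw an n-bit number, exchange them, then decide.
    The outputs differ with probability at least 1 - 1/2^n."""
    a = random.getrandbits(n)
    b = random.getrandbits(n)          # independent randomness on the other copy
    if a != b:
        out_a, out_b = (0, 1) if a < b else (1, 0)   # smaller number says 0
    else:
        out_a, out_b = random.getrandbits(1), random.getrandbits(1)  # tie-break coin flips
    return out_a, out_b

trials = [leader_election(8) for _ in range(10_000)]
print(sum(x != y for x, y in trials) / len(trials))
# About 0.998 for n = 8: failures happen only when the numbers tie and the coins also agree.
```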

The link would have been to better illustrate how the proposed system works, not about motivation. So, it seems that you understood the proposal, and wouldn't have needed it.

I don't exactly want to learn the cartesian boundary. A cartesian agent believes that its input set fully screens off any other influence on its thinking, and the outputs screen off any influence of the thinking on the world. It's very hard to find things that actually fulfill this. I explain how PDT can learn cartesian boundaries, if there are any, as a sanity/conservative extension check. But it can also learn that it controls copies or predictions of itself, for example.

The apparent difference is based on the incoherent counterfactual "what if I say heads and my copy says tails"

I don't need counterfactuals like that to describe the game, only implications. If you say heads and your copy says tails, you will get one util, just like how, if 1+1=3, the circle can be squared.

The interesting thing here is that superrationality breaks up an equivalence class relative to classical game theory, and people's intuitions don't seem to have incorporated this.

"The same" in what sense? Are you saying that what I described in the context of game theory is not surprising, or outlining a way to explain it in retrospect? 

Communication won't make a difference if you're playing with a copy.

2adamShimi
Well, if I understand the post correctly, you're saying that these two problems are fundamentally the same problem, and so rationality should be able to solve them both if it can solve one. I disagree with that, because from the perspective of distributed computing (which I'm used to), these two problems are exactly the two kinds of problems that are fundamentally distinct in a distributed setting: agreement and symmetry-breaking. Actually it could. Basically all of distributed computing assumes that every process is running the same algorithm, and you can solve symmetry-breaking in this case with communication and additional constraints on the scheduling of processes (the difficulty here is that the underlying graph is symmetric, whereas if you had some form of asymmetry (like three processes in a line, such that the one in the middle has two neighbors but the others only have one), then you can use that asymmetry directly to solve symmetry-breaking). (By the way, you just gave me the idea that maybe I can use my knowledge of distributed computing to look at the sort of decision problems where you play with copies? Don't know if it would be useful, but that's interesting at least)

What is and isn't an isomorphism depends on what you want to be preserved under isomorphism. If you want everything that's game-theoretically relevant to be preserved, then of course those games won't turn out equivalent. But that doesn't explain anything. If my argument had been that the correct action in the prisoner's dilemma depends on sunspot activity, you could have written your comment just as well.

2Gurkenglas
It's easy to get confused between similar equivalence relations, so it's useful to formally distinguish them. See the other thread's arguing about sameness. Category theory language is relevant here because it gives a short description of your anomaly, so it may give you the tools to address it. And it is in fact unusual: For the cases of the underlying sets of a graph, group, ring, field, etc., one can find a morphism for every function. We can construct a similar anomaly for the case of rings by saying that every ring's underlying set contains 0 and 1, and that these are its respective neutral elements. Then a function that swaps 0 and 1 would have no corresponding ring morphism. The corresponding solution for your case would be to encode the structure not in the names of the elements of the underlying set, but in something that falls away when you go to the set. This structure would encode such knowledge as which decision is called heads and which tails. Then for any game and any function from its underlying set you could push the structure forward.
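
A concrete instance of the ring anomaly Gurkenglas describes (a standard fact, spelled out for concreteness):

```latex
\text{In } \mathbb{F}_2=\{0,1\}: \text{ any ring morphism } f \text{ must satisfy } f(0)=0 \text{ and } f(1)=1,
\text{ so the set-level swap } 0\mapsto 1,\ 1\mapsto 0 \text{ has no ring morphism lying over it.}
```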

Right, but then, are all other variables unchanged? Or are they influenced somehow? The obvious proposal is EDT -- assume influence goes with correlation.

I'm not sure why you think there would be a decision theory in that as well. Obviously when BDT decides its output, it will have some theory about how its output nodes propagate. But the hypothesis as a whole doesn't think about influence. It's just a total probability distribution, and it includes that some things inside it are distributed according to BDT. It doesn't have beliefs about "if the output of ... (read more)

Adding other hypotheses doesn't fix the problem. For every hypothesis you can think of, there's a version of it that says "but I survive for sure" tacked on. This hypothesis can never lose evidence relative to the base version, but it can gain evidence anthropically. Eventually, these will get you. Yes, there are all sorts of considerations that are more relevant in a realistic scenario; that's not the point.

2ChristianKl
You don't need to add other hypotheses to know that there might be unknown additional hypotheses.

The problem, as I understand it, is that there seem to be magical hypotheses you can't update against from ordinary observation, because by construction the only time they make a difference is in your odds of survival. So you can't update them from observation, and anthropics can only update in their favour, so eventually you end up believing one and then you die.

2Charlie Steiner
The amount that I care about this problem is proportional to the chance that I'll survive to have it.

Maybe the disagreement is in what we consider the alternative hypotheses to be? I'm not imagining a broken gun - you could examine your gun and notice it isn't broken, or just shoot into the air a few times and see it firing. But even after you eliminate all of those, there's still the hypothesis "I'm special for no discernible reason" (or is there?) that can only be tested anthropically, if at all. And this seems worrying.

Maybe heres a stronger way to formulate it: Consider all the copies of yourself across the multiverse. They will sometimes face situations where... (read more)

2Charlie Steiner
I think in the real world, I am actually accumulating evidence against magic faster than I am trying to commit elaborate suicide.

To clarify, do you think I was wrong to say UDT would play the game? I've read the two posts you linked. I think I understand Wei's, and I think the UDT described there would play. I don't quite understand yours.

2Charlie Steiner
I agree with faul sname, ADifferentAnonymous, shminux, etc. If every single person in the world had to play russian roulette (1 bullet and 5 empty chambers), and the firing pin was broken on exactly one gun in the whole world, everyone except the person with the broken gun would be dead after about 125 trigger pulls. So if I remember being forced to pull the trigger 1000 times, and I'm still alive, it's vastly more likely that I'm the one human with the broken gun, or that I'm hallucinating, or something else, rather than me just getting lucky. Note that if you think you might be hallucinating, and you happen to be holding a gun, I recommend putting it down and going for a nap, not pulling the trigger in any way. But for the sake of argument we might suppose the only allowed hypotheses are "working gun" and "broken gun." Sure, if there are miraculous survivors, then they will erroneously think that they have the broken gun, in much the same way that if you flipped a coin 1000 times and just so happened to get all heads, you might start to think you had an unfair coin. We should not expect to be able to save this person. They are just doomed. It's like poker. I don't know if you've played poker, but you probably know that the basic idea is to make bets that you have the best hand. If you have 4 of a kind, that's an amazing hand, and you should be happy to make big bets. But it's still possible for your opponent to have a royal flush. If that's the case, you're doomed, and in fact when the opponent has a royal flush, 4 of a kind is almost the worst hand possible! It makes you think you can bet all your money when in fact you're about to lose it all. It's precisely the fact that four of a kind is a good hand almost all the time that makes it especially bad that remaining tiny amount of the time. The person who plays russian roulette and wins 1000 times with a working gun is just that poor sap who has four of a kind into a royal flush. (P.S.: My post is half explan
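
The arithmetic behind "about 125 trigger pulls" checks out (assuming roughly 8 billion players and independent 5/6 survival odds per pull; the exact population assumed in the comment isn't stated):

```latex
8\times 10^{9}\cdot\left(\tfrac{5}{6}\right)^{n} \approx 1
\;\Longrightarrow\;
n \approx \frac{\ln(8\times 10^{9})}{\ln(6/5)} \approx \frac{22.8}{0.182} \approx 125 .
```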