That's fair, yeah
We need a proper mathematical model to study this further. I expect it to be difficult to set up because the situation is so unrealistic/impossible as to be hard to model. But if you do have a model in mind I'll take a look
It would help to have a more formal model, but as far as I can tell the oracle can only narrow down its predictions of the future to the extent that those predictions are independent of the oracle's output. That is to say, if the people in the universe ignore what the oracle says, then the oracle can give an informative prediction.
This would seem to exactly rule out any type of signal which depends on the oracle's output, which is precisely the types of signals that nostalgebraist was concerned about.
The problem is that the act of leaving the message depends on the output of the oracle (otherwise you wouldn't need the oracle at all, but then you also wouldn't know what message to leave). If the machine's behavior depends on the oracle's output, then we have to be careful about what the fixed point will be.
For example, if we try to fight the oracle and do the opposite, we get the "noise" situation from the grandfather paradox.
But if we try to cooperate with the oracle and do what it predicts, then there are many different fixed points and no telling which the oracle would choose (this is not specified in the setting).
It would be great to see a formal model of the situation. I think any model in which such message transmission would work is likely to require some heroic assumptions which don't correspond much to real life.
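As a first pass at such a model, here is a minimal sketch (entirely my own toy construction, not from any paper): the oracle outputs a probability p that the message gets left, the civilization observes p and acts, and a consistent prediction is a fixed point of the response map. The "fight" and "cooperate" cases above come out exactly as described.

```python
# Toy model (illustrative assumption): the oracle outputs p, its predicted
# probability that the civilization leaves the message; the civilization
# sees p and responds. A consistent prediction is a fixed point.

def fixed_points(response, grid=10001):
    """Return the grid values p in [0, 1] with response(p) == p."""
    pts = []
    for i in range(grid):
        p = i / (grid - 1)
        if abs(response(p) - p) < 1e-9:
            pts.append(p)
    return pts

# "Fight the oracle": do the opposite, i.e. act with probability 1 - p.
# The only consistent prediction is p = 1/2 -- pure noise.
print(fixed_points(lambda p: 1 - p))    # [0.5]

# "Cooperate with the oracle": act with probability p. Now every p is a
# fixed point, and nothing in the setting picks out which one the oracle uses.
print(len(fixed_points(lambda p: p)))   # 10001
```

The two prints are exactly the two failure modes above: fighting collapses the prediction to noise, cooperating leaves a massive underdetermined family of fixed points.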
Thanks for the link to reflective oracles!
On the gap between the computable and uncomputable: It's not so bad to trifle a little. Diagonalization arguments can often be avoided with small changes to the setup, and a few of Paul's papers are about doing exactly this.
I strongly disagree with this: diagonalization arguments often cannot be avoided at all, no matter how you change the setup. This is what vexed logicians in the early 20th century: no matter how you change your formal system, you won't be able to avoid Gödel's incompleteness theorems.
There is a trick that reliably gets you out of such paradoxes, however: switch to probabilistic mixtures. This is easily seen in a game setting: in rock-paper-scissors, there is no deterministic Nash equilibrium. Switch to mixed strategies, however, and suddenly there is always a Nash equilibrium.
This is the trick that Paul is using: he is switching from deterministic Turing machines to randomized ones. That's fine as far as it goes, but it has some weird side effects. One of them is that if a civilization tries to predict the universal prior that is simulating it, and tries to send a message, then it is likely that with "reflective oracles" in place, the only message it can send is random noise. That is, Paul shows that reflective oracles exist in the same way that Nash equilibria exist; but there is no control over which reflective oracle you get, and in paradoxical situations (like rock-paper-scissors) the Nash equilibrium is the boring "mix everything together uniformly".
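The rock-paper-scissors claim is easy to check directly; here is a quick sketch (a standard textbook computation, written out for concreteness):

```python
# Row player's payoff matrix for rock-paper-scissors:
# payoff[i][j] = row's payoff when row plays i and column plays j
# (0 = rock, 1 = paper, 2 = scissors).
payoff = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def best_reply_value(mix):
    """Best payoff any pure strategy earns against the mixed strategy mix."""
    return max(sum(payoff[i][j] * mix[j] for j in range(3)) for i in range(3))

# Against the uniform mixture every pure reply earns exactly 0, so no
# deviation is profitable: the uniform mix is a (mixed) Nash equilibrium.
print(best_reply_value([1/3, 1/3, 1/3]))   # 0.0

# Against any deterministic strategy some reply earns +1, so no pure
# strategy can be an equilibrium.
print(best_reply_value([1, 0, 0]))         # 1 (paper beats rock)
```

So the equilibrium exists only after mixing, and it is exactly the boring uniform one.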
The underlying issue is that a universe that can predict the universal prior, which in turn simulates the universe itself, can run into a grandfather paradox: it can see its own future by looking at the simulation, and then do the opposite. The grandfather paradox here is the universe deciding to kill the grandfather of a child that the simulation predicts will exist.
Paul solves this by only letting the universe see its own future through a "reflective oracle", which essentially finds a fixed point (which is a probability distribution). The fixed point of a grandfather paradox is something like: half the time the simulation shows the grandchild alive, causing the real universe to kill the grandfather; the other half of the time, the simulation shows the grandfather dead and the grandchild never existing. Such a fixed point exists even when the universe tries to do the opposite of the prediction.
The thing is, this fixed point is boring! Repeat this enough times, and it eventually just says "well my prediction about your future is random noise that doesn't have to actually come true in your own future". I suspect that if you tried to send a message through the universal prior in this setting, the message would consist of essentially uniformly random bits. This would depend on the details of the setup, I guess.
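To illustrate the "uniformly random bits" guess with a toy calculation (my own simplification, under the illustrative assumption that each bit of a k-bit message plays its own independent grandfather-paradox game, so each bit's only consistent marginal is the fixed point of p = 1 - p, namely 1/2):

```python
import math
from itertools import product

k = 4
p = 0.5  # per-bit fixed point of p = 1 - p

# Induced distribution over k-bit messages, assuming independent bits:
dist = {}
for bits in product([0, 1], repeat=k):
    ones = sum(bits)
    dist[bits] = p ** ones * (1 - p) ** (k - ones)

# Entropy in bits; a value of k means the message carries no information.
entropy = -sum(q * math.log2(q) for q in dist.values())
print(entropy)  # 4.0 -- all 2**4 messages equally likely: pure noise
```

The message distribution is maximum-entropy, i.e. the "message" is indistinguishable from coin flips, which is the sense in which the fixed point is boring.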
I think the problem to grapple with is that I can cover the rationals in [0,1] with countably many intervals of total length only 1/2 (e.g., enumerate the rationals in [0,1], and place an interval of length 1/4 around the first rational, an interval of length 1/8 around the second, etc.). This is not possible with the reals -- that's the insight that makes measure theory work!
The covering means that the rationals in an interval cannot have a well defined length or measure which behaves reasonably under countable unions. This is a big barrier to doing probability theory. The same problem happens with ANY countable set -- the reals only avoid it by being uncountable.
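The covering can be written down explicitly; here is a small sketch (using the obvious enumerate-by-denominator ordering, which is just one illustrative choice):

```python
from fractions import Fraction
from itertools import count

def rationals_01():
    """Enumerate the rationals in [0, 1] by increasing denominator."""
    seen = set()
    for d in count(1):
        for numer in range(d + 1):
            q = Fraction(numer, d)
            if q not in seen:
                seen.add(q)
                yield q

# The n-th rational gets an open interval of length 1/2**(n+2) centered on
# it, so the total length is 1/4 + 1/8 + ... = 1/2, yet every rational in
# [0, 1] is covered by its own interval. For example, 3/7:
target = Fraction(3, 7)
for n, q in enumerate(rationals_01()):
    if q == target:
        idx, length = n, Fraction(1, 2 ** (n + 2))
        break
print(idx, length)  # 3/7 is the 15th rational in this ordering
```

Every rational shows up at some finite index, so every rational is inside some interval, while the lengths sum to 1/2: exactly the paradoxical covering described above.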
Evan Morikawa?
https://twitter.com/E0M/status/1790814866695143696
Weirdly aggressive post.
I feel like maybe what's going on here is that you do not know what's in The Bell Curve, so you assume it is some maximally evil caricature? Whereas what's actually in the book is exactly Scott's position, the one you say is "his usual "learn to love scientific consensus" stance".
If you'd stop being weird about it for just a second, could you answer something for me? What is one (1) position that Murray holds about race/IQ and Scott doesn't? Just name a single one, I'll wait.
Or maybe what's going on here is that you have a strong "SCOTT GOOD" prior as well as a strong "MURRAY BAD" prior, and therefore anyone associating the two must be on an ugly smear campaign. But there's actually zero daylight between their stances and both of them know it!
Relatedly, if you cannot outright make a claim because it is potentially libellous, you shouldn't use vague insinuation to imply it to your massive and largely-unfamiliar-with-the-topic audience.
Strong disagree. If I know an important true fact, I can let people know in a way that doesn't cause legal liability for me.
Can you grapple with the fact that the "vague insinuation" is true? Like, assuming it's true and that Cade knows it to be true, your stance is STILL that he is not allowed to say it?
Your position seems to amount to epistemic equivalent of 'yes, the trial was procedurally improper, and yes the prosecutor deceived the jury with misleading evidence, and no the charge can't actually be proven beyond a reasonable doubt- but he's probably guilty anyway, so what's the issue'. I think the issue is journalistic malpractice. Metz has deliberately misled his audience in order to malign Scott on a charge which you agree cannot be substantiated, because of his own ideological opposition (which he admits). To paraphrase the same SSC post quoted above, he has locked himself outside of the walled garden. And you are "Andrew Cord", arguing that we should all stop moaning because it's probably true anyway so the tactics are justified.
It is not malpractice, because Cade had strong evidence for the factually true claim! He just didn't print the evidence. The evidence was of the form "interview a lot of people who know Scott and decide who to trust", which is a difficult type of evidence to put into print, even though it's epistemologically fine (in this case IT LED TO THE CORRECT BELIEF so please give it a rest with the malpractice claims).
Here is the evidence of Scott's actual beliefs:
https://twitter.com/ArsonAtDennys/status/1362153191102677001
As for your objections:
- First of all, this is already significantly different, more careful and qualified than what Metz implied, and that's after we read into it more than what Scott actually said. Does that count as "aligning yourself"?
This is because Scott is giving a maximally positive spin on his own beliefs! Scott is agreeing that Cade is correct about him! Scott had every opportunity to say "actually, I disagree with Murray about..." but he didn't, because he agrees with Murray just like Cade said. And that's fine! I'm not even criticizing it. It doesn't make Scott a bad person. Just please stop pretending that Cade is lying.
Relatedly, even if Scott did truly believe exactly what Charles Murray does on this topic, which again I don't think we can fairly assume, he hasn't said so, and that's important. Secretly believing something is different from openly espousing it, and morally it can be very different if one believes that openly espousing it could lead to it being used in harmful ways (which, from the above, Scott clearly does believe, even in the qualified form which he may or may not hold). Scott is going to some lengths and being very careful not to espouse it openly and without qualification, and clearly believes it would be harmful to do so, so it's clearly dishonest and misleading to suggest that he "aligns himself" with Charles Murray on this topic. Again, this is even after granting the very shaky proposition that he secretly does align with Charles Murray, which I think we have established is a claim that cannot be substantiated.
Scott so obviously aligns himself with Murray that I knew it before that email was leaked or Cade's article was written, as did many other people. At some point, Scott even said that he will talk about race/IQ in the context of Jews in order to ease the public into it, and then he published this. (I can't find where I saw Scott saying it though.)
- Further, Scott, unlike Charles Murray, is very emphatic about the fact that, whatever the answer to this question, it should not affect our thinking on important issues or our treatment of anyone. Is this important addendum not elided by the idea that he 'aligned himself' with Charles Murray? Would that not be a legitimate "gripe"?
Actually, this is not unlike Charles Murray, who also says this should not affect our treatment of anyone. (I disagree with the "thinking on important issues" part, which Scott surely does think it affects.)
The epistemology was not bad behind the scenes; it just was not presented to the readers. That is unfortunate, but it is hard to fit that kind of evidence into a NYT article (there are limits on how many receipts you can put in an article, and some of the sources may have been off the record).
Cade correctly informed the readers that Scott is aligned with Murray on race and IQ. This is true and informative, and at the time some people here doubted it before the one email leaked. Basically, Cade's presented evidence sucked but someone going with the heuristic "it's in the NYT so it must be true" would have been correctly informed.
I don't know if Cade had a history of "tabloid rhetorical tricks" but I think it is extremely unbecoming to criticize a reporter for giving true information that happens to paint the community in a bad light. Also, the post you linked by Trevor uses some tabloid rhetorical tricks: it says Cade sneered at AI risk but links to an article that literally doesn't mention AI risk at all.
Given o1, I want to remark that the prediction in (2) was right: instead of training LLMs to give short answers, an LLM is trained to give long answers and another LLM summarizes them.