Jessica Taylor. CS undergrad and Master's at Stanford; former research fellow at MIRI.
I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.
Blog: unstableontology.com
Twitter: https://twitter.com/jessi_cata
It means reductionism isn't strictly true as ontology. I suppose it might be more precise to talk about "reductionist physics" than "physics", although some might consider that redundant.
It isn't obvious that biological structure isn't efficiently readable from the microstate. It at least doesn't seem cryptographically hard, so it's plausibly polynomial time in general.
With turbulence you can pretty much read the current macrostate from the current microstate? You just can't predict the future well.
I'd say homomorphic encryption computation facts, not just mental ones, are beyond physics in this sense. Other macro facts might be too, though it's of course less clear.
Again, the same ontological status applies to homomorphic encryption and other entities. However the same epistemic status doesn't apply. And the "efficiently determinable" criterion is an epistemic one.
A reason to pay attention to mental ones is that they are more salient as "hard to deny the existence of from some perspectives". Whereas you could say a regular homomorphic encryption fact is "not real" in the sense of "not being there in the state of reality at the current time".
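To make the homomorphic encryption case concrete, here is a toy sketch of an additively homomorphic scheme (textbook Paillier with deliberately tiny, insecure parameters, purely for illustration). The point it illustrates: a computation on plaintexts is "present" in the ciphertext state, yet without the key there is no known efficient way to read the plaintext facts off that state.

```python
# Toy Paillier additively homomorphic encryption.
# WARNING: illustrative only -- the primes are tiny and this is insecure;
# real deployments use ~2048-bit moduli.
import math
import random

p, q = 293, 433            # small primes, chosen here just for the demo
n = p * q
n2 = n * n
g = n + 1                  # standard convenient choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)       # modular inverse; valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# without the party doing the multiplication ever seeing a plaintext.
a, b = 17, 25
c = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c) == a + b
```

The party holding only ciphertexts can carry the computation forward, but the fact "this ciphertext encodes 42" is not efficiently extractable from the state without the key.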
and we don’t treat that as evidence that the visual appearance “exceeds physics.”
This is still something I'd disagree with? Like, it still seems notable that visual appearances aren't determined as an efficient function of physics. It suggests perhaps there is more to reality than physics; otherwise, what are you seeing? "Appearances as such exceed physics" is not substantially different from what I mean by "mind exceeds physics". This seems like a minor semantic issue. Appearances are mental, so if appearances exceed physics then so does mind; I'm not making any strong claim like "mind, and only mind, exceeds physics".
I'm saying efficient reconstructibility is unclear in the rainbow case, but that the same principles have to explain it and non-efficiently-reconstructible cases like homomorphic encryption. I don't take this as a reductio but as a trilemma; see step 11.
Take a rainbow. Let p be the full microphysical state of the atmosphere and EM field, and let a be the appearance of the rainbow to an observer. The observer trivially “knows” a. Yet from p, even a quantum-bounded “Laplace’s demon” cannot, in general, P-efficiently compute the precise phenomenal structure of that appearance.
This may be true but it's really not obvious. The homomorphic encryption example makes one encounter such a case more clearly. If there's no hard encryption there, why couldn't Laplace's demon determine it efficiently?
That is an implausible conclusion. The physical state fully fixes the appearance; what fails is only efficient external reconstruction, not physical determination.
The thing you quoted and said was implausible had "efficiently" in it...
Homomorphic encryption sharpens the asymmetry between internal access and external decipherability, but it does not introduce a new ontological gap.
Yeah it just makes an existing problem more obvious.
At the end of the day the natural supervenience relation of observations on physics should work similarly in the rainbow case and the homomorphic encryption case. The homomorphic encryption case just makes more clear something that might have gotten skipped over in the rainbow case: the natural supervenience relation need not be efficiently computable from the physical state. The information of the observations doesn't need to be directly sitting there; the way of picking it out might need to be a complicated function rather than a simple, efficient "locate and extract the information" one.
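As a loose analogy for "determined but not efficiently extractable" (my construction, not anything from the thread): a hash commitment. Given a known finite message space, the committed value is uniquely fixed by the state, but there is no direct readout; the only general extraction procedure is exhaustive search, which blows up with the size of the space.

```python
# A fact can supervene on a state without being cheaply "located" in it.
# The state below uniquely determines the secret (over a known message
# space), but extraction requires brute-force search, not direct readout.
import hashlib

def commit(secret: int) -> bytes:
    return hashlib.sha256(str(secret).encode()).digest()

state = commit(7919)  # the "state"; 7919 plays the role of the hidden fact

def extract(state: bytes, space: int = 10_000) -> int:
    # Deterministic function of the state, but computed by exhaustive
    # search; infeasible for a large message space.
    for candidate in range(space):
        if commit(candidate) == state:
            return candidate
    raise ValueError("not in search space")

assert extract(state) == 7919
```

For a toy space of 10,000 candidates the search is instant; the analogy is that the extraction function exists and is well-defined while being arbitrarily expensive to evaluate.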
Here's a basic problem with infinite bases. Suppose $\Delta : V \to V^{\mathbb{N}}$ duplicates its argument $\mathbb{N}$ times. And suppose $\Sigma : V^{\mathbb{N}} \to V$ sums all entries. Now $\Sigma \circ \Delta$ is not a sensible function.
So you really need to have some restriction. Like for example, maybe we interpret $V^{\mathbb{N}}$ as requiring all but a finite number of entries to be zero. That would at least rule out $\Delta$. Now $V^{\mathbb{N}}$ is not a "true infinite product" in the category-theory sense. But we would still have $\mathrm{head} : V^{\mathbb{N}} \to V$ and $\mathrm{rest} : V^{\mathbb{N}} \to V^{\mathbb{N}}$ ("first" and "rest" of infinite list). Which might enable induction. I'm not sure.
Alternatively we could have $V^{\mathbb{N}}$ be unrestricted, but then $\Sigma$ can't be defined. Either way there's an issue with allowing functions to or from $V^{\mathbb{N}}$ to be represented by arbitrary infinite matrices.
EDIT: another framing of this is that "infinite product" ($\prod_{\mathbb{N}} V$, unrestricted) and "infinite coproduct" ($\bigoplus_{\mathbb{N}} V$, all but finitely many entries zero) come apart in $\mathbf{Vect}$. So there isn't strictly an infinite biproduct.
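The product/coproduct distinction can be sketched computationally (my illustration, under the assumption that we model direct-sum elements as finitely supported sequences). Representing a vector as a dict from index to nonzero entry, "sum all entries" is well-defined, while "duplicate the argument infinitely many times" is not representable, since its output would have infinite support; "first" and "rest" of the list remain available.

```python
# Finitely supported sequences as a model of the infinite coproduct
# (direct sum): only finitely many nonzero entries, stored sparsely.
# total (sum of all entries) is well-defined here; an infinite
# duplication map is not, since its output would have infinite support.
from typing import Dict

Vec = Dict[int, float]  # sparse map: index -> nonzero entry

def total(v: Vec) -> float:
    # well-defined on the direct sum: finitely many nonzero entries
    return sum(v.values())

def head(v: Vec) -> float:
    # "first" of the infinite list (absent index means zero)
    return v.get(0, 0.0)

def rest(v: Vec) -> Vec:
    # "rest" of the infinite list: shift all indices down by one
    return {i - 1: x for i, x in v.items() if i >= 1}

v = {0: 2.0, 3: 5.0}
assert total(v) == 7.0
assert head(v) == 2.0
assert total(rest(v)) == 5.0
```

Dropping the finite-support restriction (arbitrary index-to-entry functions) would model the infinite product instead, where `head` and `rest` survive but `total` no longer makes sense.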
Ah. I think the inference they may draw is that a paperclip maximizer is perfectly rational/coherent, as is a staple maximizer and so on. They don't think there are additional constraints as you suggest, beyond minimal ones like not having an "especially stupid" goal, such as "die as fast as possible".
I don't see how Bayesianism/vNM/expected utility theory should argue in favor of orthogonality.
I'm saying they argue against orthogonality in the post...
But isn't this subsumed by "above and beyond the computational tractability of that goal"?
You seem to think either "diagonality" or "strong orthogonality" must hold. But the post rejects that dichotomy: I am arguing against strong orthogonality and against diagonality.
Rough argument against diagonality is something like "paperclip-maximizer-like entities seem like they would be possible/coherent", although there are some unknowns there, like how different parts of the agent separated by large distances coordinate/mutate. But perhaps more basic than that: if someone is making a strong claim (diagonality), they should probably justify it.
See Nick Land: Orthogonality; it has relevant excerpts, including about Pythia.