I'm an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it's about a post, you can add [q] or [nq] at the end if you want me to quote or not quote it in the comment section.
The Stanford Encyclopedia thing is a language game. Trying to make deductions in natural language about unrelated statements is not the kind of thing that can tell you what time is, one way or another. It can only tell you something about how we use language.
But also, why do we need an argument against presentism? Presentism seems a priori quite implausible; seems a lot simpler for the universe to be an unchanging 4d block than a 3d block that "changes over time", which introduces a new ontological primitive that can't be formalized. I've never seen a mathematical object that changes over time, I've only seen mathematical objects that have internal axes.
This all seems correct. The one thing I might add is that IME the usual effect of stating, however politely, that someone may not be 100% acting in good faith is to turn the conversation into much more of a conflict than it already was, which is why pretending it's an object-level disagreement is almost always the correct strategy. But I agree that actually believing the other person is acting in good faith is usually quite silly.
(I also think the term is horrendous; IIRC I've never used either "good faith" or "bad faith" in conversation.)
((This post also contributes to this nagging sense that I sometimes have that Zack is the ~only person on this platform who is actually doing rationality in a completely straightforward way, as intended, and everyone else is playing some kind of social game in which other considerations restrict the move set and rationality is only used to navigate within the subset of still-permissible moves. I'm not in the business of fighting this battle, but in another timeline maybe I would be.))
Yeah, e.g., any convergent series.
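To make the point concrete, here's a minimal numeric illustration (my sketch, not part of the original exchange) of how a convergent series packs infinitely many positive terms into a finite value:

```python
# Partial sums of the geometric series sum_{k>=1} 1/2^k approach 1:
# infinitely many strictly positive terms, yet a finite total.
partial = 0.0
for k in range(1, 60):
    partial += 0.5 ** k

# After 59 terms we are already within 2^-59 of the limit.
assert abs(partial - 1.0) < 1e-12
```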
This is assuming no expression that converges to the constants exists? Which I think is an open question. (Of course, it would only be finite if there are such expressions for all constants. But even so, I think it's an open question.)
As someone who expects LLMs to be a dead end, I nonetheless think this post makes a valid point and does so using reasonable and easy-to-understand arguments. I voted +1.
As I already commented, I think the numbers here are such that the post should be considered quite important, even though I agree that it fails at establishing that fish can suffer (and perhaps lacks comparison to fish in the wild). If there were another post with a more nuanced stance on this point, I'd vote for that one instead, but there isn't. I think fish wellbeing should be part of the conversation more than it is right now.
It's also very unpleasant to think or write about these things, so I'm also more willing to overlook flaws than I'd be by default.
Shape can most certainly be emulated by a digital computer. The theory in the paper you linked would make a brain simulation easier, not harder, and the authors would agree with that.
Would you bet on this claim? We could probably email James Pang to resolve a bet. (Edit: I put about 30% on Pang saying that it makes simulation easier, but not necessarily 70% on him saying it makes simulation harder, so I'd primarily be interested in a bet if "no idea" also counts as a win for me.)
It is not proposing that we need to think about something other than neuronal axons and dendrites passing information, but rather about how to think about population dynamics.
Really? Isn't the shape of the brain something other than axons and dendrites?
The model used in the paper doesn't take any information about neurons into account, it's just based on a mesh of the geometry of the particular brain region.
So this is the opposite of proposing that a more detailed model of brain function is necessary; it's proposing a coarser-grained approximation.
And they're not addressing what it would take to perfectly understand or reproduce brain dynamics, just a way to approximately understand them.
The results (at least the flagship result) are about a coarse approximation, but the claim that anatomy restricts function still seems to me like contradicting the neuron doctrine.
Admittedly the neuron doctrine isn't well-defined, and there are interpretations where there's no contradiction. But shape in particular is a property that can't be emulated by digital computers, so it's a contradiction as far as the OP goes (if in fact the paper is onto something).
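As a toy illustration of the kind of geometry-only modeling being discussed (my own sketch under the assumption that the paper's approach amounts to computing eigenmodes of the brain region's geometry; this is not the paper's code): the eigenmodes of a mesh's Laplacian are smooth spatial patterns determined entirely by shape, with no neuron-level information entering anywhere.

```python
import math

# A ring "mesh" with n vertices stands in for the geometry of a brain
# region. Its graph Laplacian depends only on the mesh connectivity.
n = 16

def laplacian_apply(v):
    # (Lv)_i = 2*v_i - v_{i-1} - v_{i+1} on the ring
    # (Python's v[-1] wraps around, closing the ring at i = 0).
    return [2 * v[i] - v[i - 1] - v[(i + 1) % n] for i in range(n)]

# Mode k of the ring is a cosine wave with eigenvalue 2 - 2*cos(2*pi*k/n):
# a spatial pattern fixed by the geometry alone.
k = 1
mode = [math.cos(2 * math.pi * k * i / n) for i in range(n)]
lam = 2 - 2 * math.cos(2 * math.pi * k / n)

Lv = laplacian_apply(mode)
assert all(abs(Lv[i] - lam * mode[i]) < 1e-9 for i in range(n))
```

Activity would then be expressed as a weighted sum of such modes, which is why this is a coarse approximation rather than a more detailed model.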
My probably contrarian take is that I don't think improvement on a benchmark of math problems is particularly scary or relevant. It's not nothing -- I'd prefer if it didn't improve at all -- but it only makes me slightly more worried.