Peter Gerdes
Peter Gerdes has not written any posts yet.

At a conceptual level I'm completely on board. At a practical level I fear a disaster. Right now you at least need to find a word which you can claim to be analyzing, and that requirement encourages a certain degree of contact and disagreement, even though a hard subject like philosophy should really have five specific rebuttal papers (the kind journals won't publish) for each positive proposal, rather than the reverse as is the case now.
The problem with conceptual engineering for philosophy is that philosophers aren't really going to start going out and doing tough empirical work the way a UI designer might. All they are going to do is basically assert that...
I'd argue that this argument doesn't work, because the places where CDT, EDT, or some new system diverge from each other lie outside the set of situations in which decision theory is a useful way to think about the problem. It is always possible to simply take the outside perspective and merely describe facts of the form: under such-and-such conditions, algorithm A performs better than algorithm B.
What makes decision theory useful is that it implicitly accommodates the very common (for humans) situation in which the world doesn't depend in noticeable ways (i.e., the causal relationship is so lacking in simple patterns that it looks random to our...
Seems like phrasing it in terms of decision theory only makes the situation more confusing. Why not just state the results in terms of: assuming there are a large number of copies of some algorithm A, there is more utility if A has such-and-such properties.
This works more generally. Instead of burying ourselves in the confusions of decision theory, we can simply state results about what kinds of outcomes various algorithms give rise to under various conditions.
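To make that "outside perspective" concrete, here is a minimal sketch (the toy Newcomb-style payoffs and the one-boxer/two-boxer algorithms are my own illustrative assumptions, not anything from the original comment): rather than asking which decision theory is correct, we just tally how much utility copies of each algorithm collect in a fixed environment.

```python
# Outside-perspective tally: how much utility does each algorithm collect
# when a predictor fills the boxes by running a copy of that same algorithm?
# All numbers here are made-up toy values.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def newcomb_payoff(algorithm):
    """The predictor simply runs a copy of the algorithm to decide how to fill the opaque box."""
    prediction = algorithm()          # predictor's simulation of the agent
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = algorithm()              # the agent's actual choice
    return opaque_box if choice == "one-box" else opaque_box + 1_000

def total_utility(algorithm, copies=10_000):
    """Sum payoffs over many copies of the same algorithm facing the same setup."""
    return sum(newcomb_payoff(algorithm) for _ in range(copies))

if __name__ == "__main__":
    for alg in (one_boxer, two_boxer):
        print(alg.__name__, total_utility(alg))
```

Nothing in this description requires adjudicating between decision theories; it is just a statement about which algorithm ends up with more utility under these conditions.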
I think we need to be careful here about what constitutes a computation which might give rise to an experience. For instance, suppose a chunk of brain pops into existence but with all its momentum vectors flipped (for non-nuclear processes we can assume temporal symmetry), so the brain is running in reverse.
It seems right to say that this could just as easily give rise to the experience of being a thinking human brain. After all, we think the arrow of time is determined by the direction of increasing entropy, not by some weird fact that only computations which proceed in one direction give rise to experiences.
Ok, so far no biggie, but why insist computations be embedded...
You are making some unjustified assumptions about the way computations can be embedded in a physical process. In particular, we shouldn't presume that the only way to instantiate a computation giving rise to an experience is via the forward evolution of time. See the comment below.
That won't fix the issue. Just redo the analysis at whatever size is able to merely do a few seconds of brain simulation.
Of course, no actual individual or program is a pure Bayesian; pure Bayesian updating presumes logical omniscience, after all. Rather, when we talk about Bayesian reasoning we idealize individuals as abstract agents whose choices (potentially none) have a certain probabilistic effect on the world, i.e., we basically idealize the situation as a one-person game.
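As a rough illustration of that one-person-game idealization (the actions, probabilities, and utilities below are made-up toy numbers, not anything from the original comment), the agent is modeled purely as something whose choice of action has a probabilistic effect on outcomes, and rationality is just picking the action with the highest expected utility:

```python
# One-person game against nature: actions matter only via their
# probabilistic effect on outcomes. Toy numbers for illustration.

ACTIONS = {
    # action: {outcome: probability of that outcome given the action}
    "take_umbrella":  {"dry": 0.99, "wet": 0.01},
    "leave_umbrella": {"dry": 0.60, "wet": 0.40},
}
UTILITY = {"dry": 10.0, "wet": -5.0}

def expected_utility(action):
    """Expected utility of an action under the assumed outcome probabilities."""
    return sum(p * UTILITY[outcome] for outcome, p in ACTIONS[action].items())

best = max(ACTIONS, key=expected_utility)
print(best, {a: round(expected_utility(a), 2) for a in ACTIONS})
```

Note that there is simply no slot in this model for the agent's internal deliberative state to affect the world except through the chosen action, which is exactly where the trouble below comes from.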
You basically raise the question of what happens in Newcomb-like cases, where we allow the agent's internal deliberative state to affect outcomes independently of the explicit choices made. But the whole model breaks down the moment you do this. It no longer even makes sense to idealize a human as this kind of agent and...
While I agree with your conclusion in some sense, you are using the wrong notion of probability. The people who feel there is a right answer to the sleeping beauty case aren't talking about a formally defined count over situations in some formal model. If that were the only notion of probability, then you couldn't even talk about the probabilities of different physical theories being true.
The people who think there is a sleeping beauty paradox believe there is something like the rational credence one should have in a proposition given one's evidence. If you believe this, then you have a question to answer. What kind of...
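One way to see why a bare count over situations can't settle the dispute: both the per-awakening count and the per-experiment count are perfectly well defined and give different numbers, so the disagreement has to be about which count deserves to be called rational credence. A quick simulation sketch (my own framing and parameter choices, not from the original comment):

```python
# Sleeping Beauty: heads -> woken once, tails -> woken twice.
# Both relative frequencies below are well defined; they just answer
# different questions.
import random

def sleeping_beauty(trials=100_000, seed=0):
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    heads_experiments = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        heads_awakenings += awakenings if heads else 0
        heads_experiments += heads
    return heads_awakenings / total_awakenings, heads_experiments / trials

per_awakening, per_experiment = sleeping_beauty()
print(f"heads frequency counted per awakening:  {per_awakening:.3f}")   # ~1/3
print(f"heads frequency counted per experiment: {per_experiment:.3f}")  # ~1/2
```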
Also, I think there is a fair bit of tension between your suggestion that we should be taking advice from others about how much things should hurt and the idea that we should use the degree of pain we feel as a way to identify abusive or harmful communities and relationships. The more we allow advice from those communities to determine whether we listen to those pain signals, the less useful the signals are to us.
This is an interesting direction to explore, but as it stands I don't have any idea what you mean by "understand the go bot," and I fear that figuring that out would itself require answering more than you want to ask.
For instance, what if I just memorize the source code? I can slowly apply each step on paper, and since the adversarial training process has no training data or human expert input, if I know the rules of go I can, Chinese-room style, fully replicate the best go bot using my knowledge, given enough time.
But if that doesn't count, and you don't just mean being better than them at go, then you must have in mind that I'd somehow have the same 'insights' as the program. But now, to state the challenge, we need a precise (mathematical) definition that specifies the insights contained in a trained ML model, which means we've already solved the problem.