All of Nox ML's Comments + Replies

Nox ML10

So what part of a mathematical universe do you find distasteful?

the idea that “2” exists as an abstract idea apart from any physical model

It's this one.

Okay, but if actual infinities are allowed, then what defines small in the “made up of small parts”? Like, would tiny ghosts be okay because they’re “small”?

Given that you're asking this question, I still haven't been clear enough. I'll try to explain it one last time. This time I'll talk about Conway's Game of Life and AI. The argument will carry over straightforwardly to physics and humans. (I k... (read more)

2TAG
Note that "Platonism false" does not imply "physicalism true". Numbers just might not be real entities at all, as in Formalism. If the AI discovers transfinite maths or continuum mechanics, that fact is also entirely determined by rules of the Game of Life and the initial state. And neither of them can apply to a GoL universe-- they are not "physics". Now, at this point, you need to choose between stipulating that the non-physical maths is false because it is non physical (finitism); or accepting that Platonism and physicalism are both false. But it's not maximally effective: maximal effectiveness would mean that any mathematical truth is a physical truth. If the physical universe is any way a subset of the mathematical "universe" , you have the same problem.
2Logan Zoellner
I mean, but our universe is not Conway's Game of Life. Setting aside for now the problems with our universe being continuous/quantum weirdness/etc, the bigger issue has to do with the nature of the initial state of the board. Whether or not math would be unreasonably effective in a universe made out of Conway's Game of Life depends supremely on the initial state of the board.

If the board was initialized randomly, then it would already be in a maximum-entropy distribution, hence "minds" would have no predictive power and math would not be unreasonably effective. Any minds that did come into existence would be similar to Boltzmann Brains in the sense that they would come into existence for one brief moment and then be destroyed the next.

The initial board would have to be special for minds like ours to exist in Conway's Game of Life. The initial setup of the board would have to be in a specific configuration that allowed minds to exist for long durations of time and predict things. And in order for that to be the case, there would have to be some universe-wide set of rules governing how the board was set up. This is analogous to how the number "2" is a thing mathematicians think is useful no matter where you go in our universe. Math isn't about some local deterministic property that depends on the interaction of simple parts but about the global patterns.
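Since the exchange keeps leaning on the fact that a Game of Life history is fully fixed by the rules plus the initial state, here is a minimal sketch of one tick of the game (my own illustration, not code from any of the comments):

```python
from collections import Counter

# One tick of Conway's Game of Life on an unbounded board, with live cells
# stored as a set of (x, y) coordinates. The entire future history is a
# pure function of this rule plus the initial state.
def step(live):
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick iff it has 3 live neighbours, or has 2 and
    # is currently alive (birth-on-3, survive-on-2-or-3).
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (0, 1), (0, 2)}
assert step(step(blinker)) == blinker  # same seed state, same future, every run
```

Nothing outside the rule and the seed state gets a vote, which is the determinism both sides of this exchange take for granted.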
Nox ML10

My view is compatible with the existence of actual infinities within the physical universe. One potential source of infinity is, as you say, the possibility of infinite subdivision of spacetime. Another is the possibility that spacetime is unboundedly large. I don't have strong opinions one way or another on if these possibilities are true or not.

2Logan Zoellner
Okay, but if actual infinities are allowed, then what defines small in the "made up of small parts"? Like, would tiny ghosts be okay because they're "small"?

The Unreasonable Effectiveness of Math makes a predictive claim: models which can be represented using concise mathematical notation are more likely to be true, but this includes the whole mathematical universe. What part of the mathematical universe do you reject exactly? I'm still trying to understand this quote:

And so far it sounds like you're fine with literal infinity. So what part of a mathematical universe do you find distasteful? Is it all infinities larger than 2^ℵ0, or the idea that "2" exists as an abstract idea apart from any physical model, or something else?
Nox ML10

The assumption is that everything is made up of small physical parts. I do not assume or believe that it's easy to predict the large physical systems from those small physical parts. But I do assume that the behavior of the large physical systems is determined solely by their smaller parts.

The tautology is that any explanation about large-scale behavior that invokes the existence of things other than the small physical parts must be wrong, because those other things cannot have any effect on what happens. Note that this does not mean that we need to desc... (read more)

2Logan Zoellner
Maybe I'm just confused because I recently had an argument with someone who didn't believe in infinity. When I pointed out that all of physics is based on the assumption that spacetime is continuous (an example of infinity), his response was essentially "we'll fix that someday". So, given that you deny "the mathematical universe", does that mean you think spacetime isn't continuous? Or are "small physical parts" allowed to be infinitely subdivided?
Nox ML10

I completely agree that reasoning about worlds that do not exist reaches meaningful conclusions, though my view classifies that as a physical fact (since we produce a description of that nonexistent world inside our brains, and this description is itself physical).

it becomes apparent that if our physical world wasn’t real in a similar sense, literally nothing about anything would change as a result.

It seems to me like if every possible world is equally not real, then expecting a pink elephant to appear next to me after I submit this post seems just as ... (read more)

2Vladimir_Nesov
What I mean by reaching meaningful conclusions about counterfactuals is that you start with a problem statement, a description of a possibly counterfactual situation, and then you see what follows from that. You don't get to decide that pink elephants follow just because the situation is counterfactual; any pink elephants would need to follow from the particular problem statement that you start with. Existence of other counterfactuals (other possible worlds) with pink elephants is completely irrelevant, because we are not reasoning about them at the moment. Similarly, if you reason about the physical world that isn't real, it doesn't matter that there are other alternative physical worlds that are also not real with different properties, because we are reasoning about this particular not-real world, not those other ones. The problem statement constrains the expectations, not the reality of the thing referenced by the problem statement.
Nox ML10

I will refer to this other comment of mine to explain this miscommunication.

2Logan Zoellner
I'm afraid that doesn't make your position any more clear to me. The tautological belief that everything is made of small physical parts is itself not a "small physical part"; it is one of the most broad claims about the universe possible. It seems to me that you believe in at least 3 truths axiomatically:

1. The universe has the convenient property that the outcomes of large physical systems can be predicted by small systems such as human minds.
2. All systems are made up of small physical parts.
3. There are no other axioms besides 1, 2, and this one.

Even if I accept 1, neither 2 nor 3 follows from it, nor do they have additional predictive power that would cause me to accept them.
Nox ML10

Reasoning being real and the thing it reasons about being real are different things.

I do agree with this, but I am very confused about what your position is. In your sibling comment you said this:

Possibly the fact that I perceive the argument about reality of physics as both irrelevant and incorrect (the latter being a point I didn’t bring up) caused this mistake in misperceiving something relevant to it as not relevant to anything.

The existence of physics is a premise in my reasoning, which I justify (but cannot prove) by using the observation that... (read more)

2Vladimir_Nesov
I don't see how reality of physics is used in your reasoning. I did see that you claim that you do use it, and that you mentioned it in the posts, but I don't see how it's doing some work in some argument. I don't see how humanity accomplishing incredible things has anything to do with the world being real; we could similarly accomplish incredible things in a world that isn't real.

My frame is grounded in thinking about decision theory, where one thing that keeps coming up is counterfactuals: reasoning about what would happen under conditions that at some point are revealed to in fact fail to hold. This is reasoning about situations and worlds that are not real, and for this form of decision making to make sense, it's necessary for the reasoning about worlds that are not real to reach meaningful conclusions. This makes discussion of detailed claims about worlds that are not real a normal thing, not something strange. And when that crystallizes into an intuitive way of looking at things in general, it becomes apparent that if our physical world wasn't real in a similar sense, literally nothing about anything would change as a result. Relevance of the world being real is largely an illusion.

What matters about the world being real is that it seems to be the case that we care about what happens in the physical world, possibly more than about what happens in some other hypothetical worlds. That's a formulation of the meaning of the physical world being real that's more clear to me than how these words are normally used (i.e. without a satisfactory clarification or an argument for truth of the claim that might also serve that purpose).
Nox ML10

Okay, let's forget the stuff about the "I", you're right that it's not relevant here.

For existence in the sense that physics exists, I don’t see how it’s relevant for reasoning, but I do see how it’s relevant to decision making

Okay, I think my view actually has some interesting things to say about this. Since reasoning takes place in a physical brain, reasoning about things that don't exist can be seen as a form of physical experiment, where your brain builds a description which has properties which we assume the thing that doesn't exist would have if ... (read more)

2Vladimir_Nesov
Reasoning being real and the thing it reasons about being real are different things. Truth of an experiment being real is not the same as relevance of the experiment being real. Would the experiment behave differently if it's not real? I'd say it would need to be a different experiment to do that, not being real wouldn't suffice, so the distinction of being real vs. not is not useful for this purpose.
2Vladimir_Nesov
No, I believe I'm wrong. I reversed the point about "I" in the grandparent about 10 minutes after posting, which turns out to be too late. I originally said that I don't see how that's relevant at all, but then noticed that it's not true, since it's relevant to your argument about reality of physics. Possibly the fact that I perceive the argument about reality of physics as both irrelevant and incorrect (the latter being a point I didn't bring up) caused this mistake in misperceiving something relevant to it as not relevant to anything.
Nox ML10

I don't say in this post that everything can be deduced from bottom up reasoning.

2Logan Zoellner
Nox ML10

The fact that I live in a physical world is just a fact that I've observed, it's not a part of my values. If I lived in a different world where the evidence pointed in a different direction, I would reason about the different direction instead. And regardless of my values, if I stopped reasoning about the physical world, I would die, and this seems to me to be an important difference between the physical world and other worlds I could be thinking about.

Of course this is predicated on the concept of "I" being meaningful. But I think that this is better supported by my observations than the idea that every possible world exists and the idea that probability just represents a statement about my values.

2Vladimir_Nesov
To formulate useful abstractions, it's important to figure out what data is relevant in what arguments. For existence in the sense that physics exists, I don't see how it's relevant for reasoning (including about physics), but I do see how it's relevant to decision making, and that is the distinction I pointed out. Where the fact itself comes from is distinct from where it's useful, and I'm pointing specifically at the latter here. You then say "this is predicated on the concept of "I" being meaningful". Presumably it's the argument for reality of physics you gave that's predicated on this, not something else. Parity of N⋅(N+1) is not predicated on this, it shouldn't be part of the proof of evenness. So it's another example for the principle I'm describing in this comment. (To be clear, I'm not ruling out being wrong about the claim of something not being relevant. But the relevant wrongness would need to be in the form of it turning out to be relevant to the particular arguments in question after all, rather than merely true or known, or relevant to something else.)
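(For reference, the parity fact cited above is elementary; here is the standard two-case argument, added purely for illustration:

$$
N(N+1) \;=\;
\begin{cases}
2k\,(N+1) & \text{if } N = 2k,\\[2pt]
2\,N(k+1) & \text{if } N = 2k+1,\ \text{so that } N+1 = 2(k+1),
\end{cases}
$$

so the product of two consecutive naturals is always even, and at no point does an observer enter the proof.)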
Nox ML10

clearly physical brains can think about non physical things.

Yes, but this is not evidence for the existence of those things.

But it’s not conclusive in every case, because the simplest adequate explanation need not be a physical explanation.

There is one notion of simplicity where it is conclusive in every case: every explanation has to include physics, and then we can just cut out the extra stuff from the explanation to get one that postulates strictly fewer things and has equally good predictions.

But you're right, there are other notions of simple fo... (read more)

2TAG
I didn't say it was. Why posit that an explanation has to include physics even in cases, like this, where it adds nothing? In those cases it's simpler not to include physics.
Nox ML10

Whatever "built on top of" means.

In ZFC, the Axiom of Infinity can be written entirely in terms of ∈, ∧, ¬, and ∀. Since all of math can be encoded in ZFC (plus large cardinal axioms as necessary), all our knowledge about infinity can be described with ∀ as our only source of infinity.
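For concreteness, one standard rendering of the axiom looks like this (a sketch; ∃, →, ↔, and ∨ are the usual abbreviations definable from ∀, ∧, and ¬, and, given Extensionality, so is =):

$$
\exists x \Big[ \exists e \big( e \in x \land \forall z\, \lnot(z \in e) \big) \;\land\; \forall y \Big( y \in x \rightarrow \exists s \big( s \in x \land \forall z\, ( z \in s \leftrightarrow (z \in y \lor z = y) ) \big) \Big) \Big]
$$

That is: there is a set that contains the empty set and is closed under the successor operation y ↦ y ∪ {y}.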

Only for the subset of maths that’s also physical. You can’t resolve the Axiom of Choice problem that way.

You can't resolve the Axiom of Choice problem in any way. Both it and its negation are consistent with ZF (assuming ZF itself is consistent).

Nox ML10

Again: every mathematical error is a real physical event in someone's brain, so, again, physics guarantees nothing.

I don't get what you're trying to show with this. If I mistakenly derive in Peano Arithmetic that 2 + 2 = 3, I will find myself shocked when I put 2 apples inside a bag that already contains 2 apples and find that there are now 4 apples in that bag. Incorrect mathematical reasoning is physically distinguishable from correct mathematical reasoning.

There are, of course, lots of infinities in maths.

Everything we know about all other infinities can be built on top of just FORALL in first-order logic.

2TAG
Only for the subset of maths that's also physical. You can't resolve the Axiom of Choice problem that way. Whatever "built on top of" means. Clearly, we can intend transfinite models.
Nox ML10

Sure, I think I agree. My point is that because all known reasoning takes place in physics, we don't need to assume that any of the other things we talk about exist in the same way that physics does.

I even go a little further than that and assert that assuming that any non-physical thing exists is a mistake. It's a mistake because it's impossible for us to have evidence in favor of its existence, but we do have evidence against it: that evidence is known as Occam's Razor.

2Vladimir_Nesov
There is also little point in saying that physics exists. You can reason about physics the same way as about other things, namely without having any use for the assumption that it exists. Existence of physics makes sense as a statement about values, a claim of caring about physics more than about some other things (which don't exist in this sense). This is an ingredient of decision theory, not of reasoning.
2TAG
I can't follow your syntax, but clearly physical brains can think about non-physical things. But it's not conclusive in every case, because the simplest adequate explanation need not be a physical explanation.
Nox ML10

Physics doesn’t guarantee that mathematical reasoning works.

All of math can be built on top of first-order logic. In the sub-case of propositional logic, it's easy to see entirely within physics that if I observe that "A AND B" corresponds to reality, then when I check if "A" corresponds to reality, I will also find that it does. Every such deduction in propositional logic corresponds to something you can check in the real physical world.
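As a toy illustration of that claim (my own sketch, not part of the original comment), the entailment can be checked exhaustively, the way one could in principle check it with physical objects:

```python
from itertools import product

# Enumerate all four valuations of A and B. In every valuation where
# "A AND B" holds, "A" holds as well -- conjunction elimination never fails.
for a, b in product([False, True], repeat=2):
    if a and b:
        assert a
print("'A AND B' entails 'A' in all four valuations")
```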

The only source of infinity in first-order logic is the quantifiers, of which only one is needed: FORALL, which is basically just ... (read more)

2TAG
Again: every mathematical error is a real physical event in someone's brain, so, again, physics guarantees nothing. There are, of course, lots of infinities in maths. Our ability to reason about them means mathematical reasoning includes symbolic reasoning, not just direct calculation. That's a computation- or psychology-level observation -- it doesn't add anything to point out that brains are made of quarks.
Nox ML10

I haven't used the word "reduce" since you gave a definition of it in the other thread which didn't match the precise meaning I was aiming for. The meaning I am aiming for is given in this paragraph from this post:

If we take as assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn’t matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of if it’s easy to compute for humans), ther

... (read more)
Nox ML10

There are answers to that question.

If you don't mind, I would be interested in a link to a place that gives those answers, or at least a keyword to look up to find such answers.

3TAG
It's called philosophy of maths.
Nox ML10

Well if you're not saying it, then I'm saying it: this is a mysterious fact about physics ;P

I interpreted "which is not the same as being some sort of refutation" as being disagreement, and I knew my use of the word "contradicts" was not entirely correct according to its definition, but I couldn't think of a more accurate word so I figured it was "close enough" and used it anyway (which is a bad communication habit I should probably try to overcome, now that I'm explicitly noticing it). I'm sorry if I came across harshly in my comment.

Nox ML10

I disagree that what you're saying contradicts what I'm saying. The physical world is ordered in such a way that the reasoning you described works: this is a fact about physics. You are correct that it is a mysterious fact about physics, but positing the existence of math does not help explain it, merely changes the question from "why is physics ordered in this way" to "why is mathematics ordered in this way".

2TAG
Physics doesn't guarantee that mathematical reasoning works. There are answers to that question.
2Vladimir_Nesov
Reasoning practiced in the physical world is a genre of computation, so what matters about physics in enabling reasoning is that it can be used to implement computation, to build computers. The reasoning can then be about all sorts of things, physics being among them but not different from others in a way relevant to what makes computation reasoning.
2Vladimir_Nesov
I'm not claiming that there is a mysterious fact about physics here, or that what I'm saying contradicts what you're saying. I sketched a point that makes sense to me and stands on its own, vaguely hoping but not claiming that it's relevant or helpful. It can be very difficult to communicate or discuss an issue that's not clearly formulated, so that exchanging smaller and more clearly formulated arguments that don't depend on comprehending the specific issue is more practical.
Nox ML10

This is fair, though the lack of experiments showing the existence of anything macro that doesn't map to sub-micro state also adds a lot of confidence, in my opinion, since the number of hours humans have put into performing scientific experiments is quite high at this point.

Generally I'd say that the macro-level irrelevance of an assumption means that you can reject it out of hand, and lack of micro-level modelling means that there is work to be done until we understand how to model it that way.

2TAG
There are things that don't have reductive explanations. We didn't get a reductive explanation of consciousness the day we found out brains are made of neurons. Whether "map to" means "explained by" is another question.
Nox ML10

If you accept that the existence of mathematical truths beyond physical truths cannot have any predictive power, then how do you reconcile that with this previous statement of yours:

Presupposing things without evidence

As you can see, I am not doing that.

I will say again that I don't reject any mathematics. Even 'useless' mathematics is encoded inside physical human brains.

2TAG
If they did have predictive power, they would be physical truths. And wrong mathematics, and stuff that isn't mathematics at all. The observation you keep making doesn't explain anything ... it doesn't tell you what maths is, and it doesn't tell you what makes true maths true ... so it's not an explanatory reduction ... so it's not a reduction at all, as most people use the term.
Nox ML10

If we take as assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn't matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of if it's easy to compute for humans), there is a simple conclusion that follows logically from that.

This conclusion is that nothing extraphysical can have any predictive power above what we can predict from knowledge about physics. This follows because for s... (read more)

2TAG
That's just a long-winded way of saying that the subset of mathematical truth which does the same job as physics -- predicting things about the world -- is the same as physical truth. Which is a tautology. The problem is that the set of mathematical truths is larger than the set of physical truths, and a lot of it is physically useless.
Nox ML10

This is only correct if we presuppose that the concept of mathematically true is a meaningful thing separate from physics. The point this post is getting at is that we can still accept all human mathematics without needing to presuppose that there is such a thing. Since not presupposing this is strictly simpler, and presupposing it does not give us any predictive power, we ought not to assume that mathematics exists separately from physics.

This is not just a trivial detail. Presupposing things without evidence is the same kind of mistake as Russell's teapot, and small mistakes like that will snowball into larger ones as you build your philosophy on top of them.

2TAG
That's not an extraordinary claim: mathematics uses a different notion of proof to physics, so at the very least it has a different set of truths, and quite possibly a different concept of proof. I would say that the reverse claim is extraordinary, since it means that physicists are wasting huge sums on particle accelerators when they only need pencil and paper. A theory needs to be as simple as possible, under the constraint that it still explains the facts. The facts are that physics is empirical, maths is a priori, and most mathematical truth isn't physical truth. As you can see, I am not doing that.
Nox ML10

I agree that they are not symmetrical. My point with that thought experiment was to counter one of their arguments, which as I understand it can be paraphrased to:

In your thought experiment, the people who bet that they are in the last 95% of humans only win in aggregate, so there is still no selfish reason to think that taking that bet is the best decision for an individual.

My thought experiment with the dice was meant to show that this reasoning also applies to regular expected utility maximization, so if they use that argument to dismiss all anthrop... (read more)

Nox ML10

You do this 100 times, would you say you ought to find your number >5 about 95 times?

I actually agree with you that there is no single answer to the question of "what you ought to anticipate"! Where I disagree is that I don't think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.

My justification for this is that objectively, those who make decisions this way will tend to have more reward a... (read more)

1dadadarren
I am a little unsure about your meaning here. Say you get a reward for guessing if your number is <5 correctly; then would you also guess your number is <5 each time? I'm guessing that is not what you mean, but instead, you are thinking that as the experiment is repeated more and more, the relative frequency of you finding your own number >5 would approach 95%. What I am saying is that this belief requires an assumption about treating the "I" as a random sample, whereas for the non-anthropic problem, it doesn't.
Nox ML32

By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don't. So I don't think it takes any additional assumptions to conclude that even selfish people should say yes.

From what I can tell, a lot of your arguments seem to be applicable even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess if they rolled a number >5 or not, giving them some reward if they guess correctly. Then they reroll and ask a... (read more)
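A quick Monte Carlo sketch of the die game described in this comment (illustrative only, not part of the original exchange):

```python
import random

# A policy of always guessing ">5" on a fair 100-sided die is correct about
# 95% of the time, no matter who happens to be asked on any given roll.
random.seed(0)
trials = 100_000
wins = sum(random.randint(1, 100) > 5 for _ in range(trials))
print(wins / trials)  # ~0.95
```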

1dadadarren
For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it's a fair die, it would be around 95 times that it lands >5. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the "I" as a random sample, or making forced transcodings.

Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned with memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Let's label the original No. 1 and the 99 new clones No. 2 to No. 100 by the chronological order of their creation. It doesn't matter if you are old or new; you can repeat this experiment. Say you take the experiment repeatedly: wake up and fall asleep and let the cloning happen each time. Every day you wake up, you will find your own number. You do this 100 times; would you say you ought to find your number >5 about 95 times? My argument says there is no way to say that. Doing so would require assumptions to the effect of your soul having an equal chance of embodying each physical copy, i.e. "I" am a random sample among the group.

For the non-anthropic problem, you can use the 100-people version as a justification. Because among those people the die tosser choosing you to answer a question is an actual sampling process, it is reasonable to think that in this process you are treated the same way as everyone, e.g. the experiment didn't specifically sample you only for a certain number. But there is no sampling process determining which person you are in the anthropic version, let alone one you can assume is treating you indifferently among all souls, or treating each physical body indifferently in your embodiment process.

Also, people believing the Doomsday Argument objectively performing better as a group in your thought experiment is not a particularly strong case. Thirders have al
1Ape in the coat
For me this is where the symmetry with the doomsday argument breaks, because here the result of the die roll is actually randomly selected from a distribution from 1 to 100. With the doomsday argument that's not the case: I'm not selected from among all the humans throughout time to be instantiated in the 21st century. That's not how the causal process that produced me works. Actually, that's not how causality itself works. Future humans causally depend on the past humans; it's not an independent random variable at all.
1omegastick
How does the logic here work if you change the question to be about human history? Guessing a 50/50 coin flip is obviously impossible, but if Omega asks whether you are in the last 50% of "human history" the doomsday argument (not that I subscribe to it) is more compelling. The key point of the doomsday argument is that humanity's growth is exponential, therefore if we're the median birth-rank human and we continue to grow, we don't actually have that long (in wall-time) to live.
Nox ML155

Suppose when you are about to die, time freezes, and Omega shows up and tells you this: "I appear once to every human who has ever lived or will live, right when they are about to die. Answer this question with yes or no: are you in the last 95% of humans who will ever live in this universe? If your answer is correct, I will bring you to this amazing afterlife that I've prepared. If you guess wrong, you get nothing." Do you say yes or no?

Let's look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5... (read more)
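The collective arithmetic behind this bet is worth making explicit (a sketch of my own, not from the original comment):

```python
# Whatever the total number of humans N turns out to be, "yes, I am in the
# last 95%" is wrong only for the first 5% of birth ranks.
for n in (100, 10_000, 10**9):
    wrong = int(n * 0.05)        # ranks 1 .. 0.05*N answer "yes" incorrectly
    print(n, (n - wrong) / n)    # fraction answering "yes" correctly: 0.95
```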

1Q Home
There's a chance you're changing the nature of the situation by introducing Omega. Often "beliefs" and "betting strategy" go together, but here it may not be the case. You have to prove that the decision in the Omega game has any relation to any other decisions. There's a chance this Omega game is only "an additional layer of tautology" which doesn't justify anything. We need to consider more games. I can suggest a couple of examples. Game 1: One person can argue it becomes beneficial to "lie" about your beliefs/adopt temporal doublethink. Another person can argue for permanently changing your mind about magic. Game 2: You can argue "jumping means death, the reward is impossible to get". Unless you have access to true randomness which can vary across perfect copies of the situation. IDK. Maybe "making the Doomsday update beneficially" is impossible. You did touch on exactly that, so I'm not sure how much my comment agrees with your opinions.
1red75prime
Suppose something pertaining more to the real world: if you think that you are here and now because there will not be significantly more people in the future, then you are more likely to become depressed. Also, why does Omega use 95% and not 50%, 10%, or 0.000001%? ETA: Ah, Omega in this case is an embodiment of the litany of Tarski. Still, if there will be no catastrophe, we are those 5% who violate the litany. Not saying that the litany comes as close to useless as it can get when we are talking about a belief in an inevitable catastrophe you can do nothing about.
0dadadarren
I have actually written about this before. In short, there is no rational answer to Omega's question; to answer Omega, I can only look at the past and present situation and try to predict the future the best I could. There is no rational way to incorporate my birth rank in the answer.

The question is about "me" specifically, and my goal is to maximize my chance of getting a good afterlife. In contrast, the argument you mentioned judges the answer's merit by evaluating the collective outcome of all humans: "If everyone guesses this way then 95% of all would be correct ...". But if everyone is making the same decision, and the objective is the collective outcome of the whole group, then the individual "I" plays no part in it. To assert that this answer based on the collective outcome is also the best answer for "me" requires additional assumptions, e.g. considering myself as a random sample from all humans. That is why you are right in saying "If you accept that it's better to say yes here, then you've basically accepted the doomsday argument." In this post I have used a repeatable experiment to demonstrate this. And the top comment by benjamincosman and my subsequent replies might be relevant.
Nox ML63

I like the distinctions you make between sentient, sapient, and conscious. I would like to bring up some thoughts about how to choose a morality that I think are relevant to your points about death of cows and transient beings, which I disagree with.

I think that when choosing our morality, we should do so under the assumption that we have been given complete omnipotent control over reality and that we should analyze all of our values independently, not taking into consideration any trade-offs, even when some of our values are logically impossible to satisf... (read more)

6Nathan Helm-Burger
That's an interesting way of reframing the issue. I'm honestly just not sure about all of this reasoning, and remain so after trying to think about it with your reframing, but I feel like this does shift my thinking a bit. Thanks. I think probably it makes sense to try reasoning both with and without tradeoffs, and then comparing the results.
Nox ML44

The reason I reject all the arguments of the form "mental models are embedded inside another person, therefore they are that person" is that this argument is too strong. If a conscious AI was simulating you directly inside its main process, I think you would still qualify as a person of your own, even though the AI's conscious experience would contain all your experiences in much the same way that your experience contains all the experiences of your character.

I also added an addendum to the end of the post which explains why I don't think it's safe to assume that you feel everything your character does the same way they do.

1Myron Hedderson
To be clear, I do not endorse the argument that mental models embedded in another person are necessarily that person. It makes sense that a sufficiently intelligent person with the right neural hardware would be able to simulate another person in sufficient detail that that simulated person should count, morally. I appreciate your addendum, as well, and acknowledge that yes, given a situation like that, it would be possible for a conscious entity which we should treat as a person to exist in the mind of another conscious entity we should treat as a person, without the former's conscious experience being accessible to the latter.

What I'm trying to express (mostly in other comments) is that, given the particular neural architecture I think I have, I'm pretty sure that the process of simulating a character requires use of scarce resources such that I can only do it by being that character (feeling what it feels, seeing in my mind's eye what it sees, etc.), not run the character in some separate thread.

Some testable predictions: If I could run two separate consciousnesses simultaneously in my brain (me plus one other, call this person B) and then have a conversation with B, I would expect the experience of interacting with B to be more like the experience of interacting with other people, in specific ways that you haven't mentioned in your posts. Examples: I would expect B to misunderstand me occasionally, to mis-hear what I was saying and need me to repeat, to become distracted by its own thoughts, to occasionally actively resist interacting with me. Whereas the experience I have is consistent with the idea that in order to simulate a character, I have to be that character temporarily - I feel what they feel, think what they think, see what they see, their conscious experience is my conscious experience, etc. - and when I'm not being them, they aren't being. In that sense, "the character I imagine" and "me" are one. There is only one stream of consciousness, anyway
Nox ML30

I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.

I also think that truth is good in and of itself. I want to know the truth and I think it's good in general when people know the truth.

Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.

Telling the truth to a men... (read more)

3Yair Halberstadt
Sure, it's technically possible. My point is that on human hardware it is impossible. We don't have the resources to simulate someone without it affecting our own mental state.
2Yair Halberstadt
Why? I mean sure, ultimately morality is subjective, but even so, a morality with simpler axioms is much more attractive than one with complex axioms like "death is bad" and "truth is good". Once you have such chunky moral axioms, why is your moral system better than "orange juice is good" and "broccoli is bad"? Raw utilitarianism at least has only one axiom: the only good thing is conscious beings' utility (admittedly a complex chunky idea too, but at least it's only one, rather than requiring hundreds of indivisible core good and bad things).
Nox ML10

Points similar to this have come up in many comments, so I've added an addendum at the end of my post where I give my point of view on this.

1[anonymous]
I'd understood that already, but I would need a reason to find that believable, because it seems really unlikely. You are not directly simulating the cognitive structures of the being; that's impossible. The only way you are simulating someone is by repurposing your own cognitive structures to simulate them, and then the intensity of their emotions is the same as what you registered. How simple do you think the emergence of subjective awareness is? Most people will say that you need dedicated cognitive structures to generate the subjective I; even in theories that are mostly just something like strange loops or higher-level awareness, like HOT or AST, you at least still need a bound locus to experience. If that's so, then there's no room for conscious simulacra that feel things that the simulator doesn't. This is from a reply that I gave to Vladimir:
Nox ML30

I can definitely create mental models of people who have a pain-analogue which affects their behavior in ways similar to how pain affects mine, without their pain-analogue causing me pain.

there’s no point on reducing this to a minimal Platonic concept of ‘simulating’ in which simulating excruciating pain causes excruciating pain regardless of physiological effects.

I think this is the crux of where we disagree. I don't think it matters if pain is "physiological" in the sense of being physiologically like how a regular human feels pain. I only care if th... (read more)

1[anonymous]
How are you confident that you've simulated another conscious being that feels emotions with the same intensity as the ones you would feel if you were in that situation, instead of just running a low-fidelity simulation with decreased emotional intensity, which is how it registers within your brain's memories? Whatever subjective experience you are simulating, it's still running in your brain and with the cognitive structures that you have to generate your subjective I (I find this to be the simplest hypothesis), and that means that the simplest conclusion to draw is that whatever your simulation felt gets registered in your brain's memories; and if you find that those emotions lack much of the intensity that you would experience if you were to be in that situation, that is also the degree of emotional intensity that that being felt while being simulated.
Nox ML51

I don't personally think I'm making this mistake, since I do think that saying "the conscious experience is the data" actually does resolve my confusion about the hard problem of consciousness. (Though I am still left with many questions.)

And if we take reductionism as a strongly supported axiom (which I do), then necessarily any explanation of consciousness will have to be describable in terms of data and computation. So it seems to me that if we're waiting for an explanation of experience that doesn't boil down to saying "it's a certain type of data and computation", then we'll be waiting forever.

4Richard_Kennaway
This is a tautology. To me, the "axiom" is no more than a hypothesis. No-one has come up with an alternative that does not reduce to "magic", but neither has anyone found a physical explanation that does not also reduce to "magic". Every purported explanation has a step where magic has to happen to relate some physical phenomenon to subjective experience. Compare "life". At one time people thought that living things were distinguished from non-living things by possession of a "life force". Clearly a magical explanation, no more than giving a name to a thing. But with modern methods of observation and experiment we are able to see that living things are machines all the way down to the level of molecules, and "life force" has fallen by the wayside. There is no longer any need of that hypothesis. The magic has been dissolved. Explaining the existence of subjective experience has not reached that point. We are no nearer to it than mediaeval alchemists searching for the philosopher's stone.
Nox ML30

My best guess about what you mean is that you are referring to the part in the "Ethics" section where I recommend just not creating such mental models in the first place?

To some extent I agree that mortality doesn't mean it should've never lived, and indeed I am not against having children. However, after stumbling on the power to create lives that are entirely at my mercy and very high-maintenance to keep alive, I became more deontological about my approach to the ethics of creating lives. I think it's okay to create lives, but you must put in a best effo... (read more)

2Vladimir_Nesov
The same way you can simulate characters that are not physical people on this world, and simulate their emotions without experiencing them yourself, you can simulate a world where they live. The fact that you are simulating them doesn't affect the facts of what's happening in that world. Platonically, there are self-aware people in their own world. Saying that the world is fictional, or that they are characters, or that they are not X years old, that they don't have Y job, would be misleading. Also, you can't say it to them in their world, since you are not in their world. You can only say it to them in your world, which requires instantiating them in your world, away from all they know. Then there are mental models of those people, who are characters from a fictional world, not X years old, don't have Y job, live in your head. These mental models have the distinction of usually not being self-aware. When you explain their situation to them, you are making them self-aware.
Nox ML10

I wouldn't quite say it's a typical mind fallacy, because I am not assuming that everyone is like me. I'm just also not assuming that everyone is different from me, and using heuristics to support my inference that it's probably not too uncommon, such as reports by authors of their characters surprising them. Another small factor in my inference is the fact that I don't know how I'd write good fiction without making mental models that qualified as people, though admittedly I have very high standards with respect to characterization in fiction.

(I am aware t... (read more)

Nox ML66

The reason I care if something is a person or not is that "caring about people" is part of my values. I feel pretty secure in taking for granted that my readers also share that value, because it's a pretty common one and if they don't then there's nothing to argue about since we just have incompatible utility functions.

What would be different if it were or weren’t, and likewise what would be different if it were just part of our person-hood?

One difference that I would expect in a world where they weren't people is that there would be some feature you c... (read more)

1Myron Hedderson
I elaborated on this a little elsewhere, but the feature I would point to would be "ability to have independent subjective experiences". A chicken has its own brain and can likely have a separate experience of life which I don't share, and so although I wouldn't call it a person, I'd call it a being which I ought to care about and do what I can to see that it doesn't suffer. By contrast, if I imagine a character, and what that character feels or thinks or sees or hears, I am the one experiencing that character's (imagined) sensorium and thoughts - and for a time, my consciousness of some of my own sense-inputs and ability to think about other things is taken up by the simulation and unavailable for being consciously aware of what's going on around me. Because my brain lacks duplicates of certain features, in order to do this imagining, I have to pause/repurpose certain mental processes that were ongoing when I began imagining. The subjective experience of "being a character" is my subjective experience, not a separate set of experiences/separate consciousness that runs alongside mine the way a chicken's consciousness would run alongside mine if one was nearby. Metaphorically, I enter into the character's mindstate, rather than having two mindstates running in parallel. Two sets of simultaneous subjective experiences: Two people/beings of potential moral importance. One set of subjective experiences: One person/being of potential moral importance. In the latter case, the experience of entering into the imagined mindstate of a character is just another experience that a person is having, not the creation of a second person.
2JoeTheUser
If one is acting in the world, I would say one's sense of what a person is has to be intimately connected with the value of "caring about people". My caring about people is connected to my experience of people - there are people I never met whom I care about in the abstract, but that's from extrapolating my immediate experience of people. It seems like an easy criterion would be "exists entirely independently from me". My mental models of just about everything, including people, are sketchy, feel like me "doing something", etc. I can't effortlessly have a conversation with any mental model I have of a person, for example. Oddly enough, I can have a conversation with another as one of my mental models or internal characters (I'm a frequent DnD GM and I have NPCs I often like playing). Mental models and characters seem more like add-ons to my ordinary consciousness.
Nox ML10

I do not think that literally any mental model of a person is a person, though I do draw the line further than you.

What are your reasons for thinking that mental models are closer to markov models than tulpas? My reason for leaning more on the latter side is my own experience writing, where I found it easy to create mental models of characters who behaved coherently and with whom I could have long conversations on a level above even GPT4, let alone markov models.

Another piece of evidence is this study. I haven't done any actual digging to see if the method... (read more)

3faul_sname
I think this may just be a case of the typical mind fallacy: I don't model people in that level of detail in practice and I'm not even sure I'm capable of doing so. I can make predictions about "the kind of thing a person might say" based on what they've said before, but those predictions are more at the level of turns-of-phrase and favored topics of conversation -- definitely nothing like "long conversations on a level above GPT-4". The "why people value remaining alive" bit might also be a typical mind fallacy thing. I mostly think about personal identity in terms of memories + preferences. I do agree that my memories alone living on after my body dies would not be close to immortality to me. However, if someone were to train a multimodal ML model that can produce actions in the world indistinguishable from the actions I produce (or even "distinguishable but very very close"), I would consider that to be most of the way to effectively being immortal, assuming that model were actually run and had the ability to steer the world towards states which it prefers. Conversely, I'd consider it effectively-death to be locked in a box where I couldn't affect the state of the outside world and would never be able to exit the box. The scenario "my knowledge persists and can be used by people who share my values" would be worse, to me, than remaining alive but better than death without preserving my knowledge for people who share my values (and by "share my values" I basically just mean "are not actively trying to do things that I disprefer specifically because I disprefer them").
Nox ML10

I disagree that it means that all thinking must cease. Only a certain type of thinking, the one involving creating sufficiently detailed mental models (edit: of people). I have already stopped doing that personally, though it was difficult and has harmed my ability to understand others. Though I suppose I can't be sure about what happens when I sleep.

Still, no, I don't want everyone to die.

1[anonymous]
The subjective awareness that you simulate while simulating a character or real person's mind is pretty low-fidelity, and when you imagine someone suffering I assume your brain doesn't register it with the level of suffering you would experience; mine certainly doesn't. Some people experience hyper-empathy, and some can imagine certain types of qualia experiences as actually experienced. The people that only belong to the second type probably still don't simulate accurate experiences of excruciating pain that feel like excruciating pain, because there are no strong physiological effects that correlate with that experience. Even if the brain is simulating a person, it's pretty unbelievable to say that the brain doesn't work like always and still creates the same exact experience (I don't have memories of that in my brain while simulating). Even if the subjective I is swapped (in whatever sense), the simulation still registers in the brain's memories, and in my case I don't have any memories of simulating a lot of suffering. Does that apply to you?
Nox ML30

That's right. It's why I included the warning at the top.

1Ben Livengood
It's okay because mathematical realism can keep modeling them long after we're gone.
1Shmi
Oh, you are biting this bullet with gusto! Well, at least you are consistent. Basically, all thinking must cease then. If someone doubted that there would be a lot of people happy to assist an evil AI in killing everyone, you are an example of a person with such a mindset: consciousness is indescribably evil.
Nox ML10

One of my difficulties with this is that it seems to contradict one of my core moral intuitions, that suffering is bad. It seems to contradict it because I can inflict truly heinous experiences onto my mental models without personally suffering for it, but your point of view seems to imply that I should be able to write that off just because the mental model happens to be continuous in space-time to me. Or am I misunderstanding your point of view?

To give an analogy and question of my own, what would you think about an alien unaligned AI simulating a human ... (read more)

2metachirality
To the first one: they aren't actually suffering that much or experiencing anything they'd rather not experience, because they're continuous with you and you aren't suffering. I don't actually think a simulated human would be continuous in spacetime with the AI, because the computation wouldn't be happening inside of the qualia-having parts of the AI.
Nox ML*10

Your heuristic is only useful if it's actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it's very important to get the right heuristics: I've been wrong about what qualified as a person before, and I have blood on my hands because of it.

I don't think it's true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my ow... (read more)

Nox ML32

I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me. When I can’t will it away, when it resists me, when it’s self sustaining.

I disagree with this. Why should it matter if someone is dependent on someone else to live? If I'm in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?

8[anonymous]
You're confusing heuristics designed to apply to human plurality with absolute rules. Neither of your edge cases is possible in human plurality (alters share computational substrate, and I can't inject breakpoints into them). Heuristics always have weird edge cases; that doesn't mean they aren't useful, just that you have to be careful not to apply them to out-of-distribution data. The self-sustainability heuristic is useful because anything that's self-sustainable has enough agency that if you abuse it, it'll go badly. Self-sustainability is the point at which a fun experiment stops being harmless and you've got another person living in your head. Self-sustainability is the point at which all bets are off and whatever you made is going to grow on its own terms. And in addition, if it's self-sustaining, it's probably also got a good chunk of wants, personality depth, etc. I don't think there are any sharp dividing lines here.
Nox ML10

I think integration and termination are two different things. It's possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can't complain. But it's also possible to just terminate one without changing the other, and that is death.

But currently I am thinking that singlet personalities have less relevance than I thought and harm/suffering is bad in a way that is not connected to having an experiencer experience it.

I don't understand what you mean by this. I do think that tulpas experience things.

4Slider
I mean that if I lost my personality, or it got destroyed, I would not think of that as morally problematic in itself.
Nox ML30

Terminating a tulpa is bad for reasons that homicide is bad.

That is exactly my stance. I don't think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it's immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.

you should head off to cancel Critical Role and JJR Martin.

I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it's possi... (read more)

2Slider
Hmm, the series and character Mr Robot and Architect. One of the terminological differences in the quick look was that ceasing to have a tulpa was also referred to as "integration". That would seem to be a distinction of similar relevance to having a firm go bankrupt or fuse. I think there is some ground here that I should not agree to disagree on. But currently I am thinking that singlet personalities have less relevance than I thought, and harm/suffering is bad in a way that is not connected to having an experiencer experience it.
Nox ML30

That's fair. I've been trying to keep my statements brief and to the point, and did not consider the audience of people who don't know what tulpas are. Thank you for telling me this.

The word "tulpa" is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I'll just use my definition.

It's easiest to first explain my own... (read more)

2Slider
Reference to process is excellent and even better than leaning on a definition. With that take, in the fictional world Lain is a tulpa. Vax'ildan running on Slider (rather, the human behind the pseudonym) is not, but probably running on O'Brien is. I feel like the delineation line for "you are your masks" is that those are created accidentally or as a byproduct, and disqualify for lack of a decision to opt in. (The other candidate criterion would be that they are not individuated enough.)

It is not clear to me why creating tulpas would be immoral. If it is inherently so, you should head off to cancel Critical Role and JJR Martin. Or is the involvement of a magic circle, where the arena of the tulpa is limited and well-defined, relevant to what counts as proper? Some guesses which I don't think are good enough to convince me:

Ontological inertia option: 1) Terminating a tulpa is bad for reasons that homicide is bad. 2) Having a tulpa around increases the need to terminate it. 3) Creating a tulpa means 2, which leads to 1.

Scapegoat option: If you ever talk with your tulpa about anything important, it affects what you do. You might not be able to identify which bits are because of the tulpa. You might wrongly blame your tulpa. Thus it can be an avenue to dodge life-responsibility. (Percy influences how Jaffe plays his other characters; it is doing cognitive work.)

Designer human option: Manifesting a Mary Sue is playing god in a bad way. It is a way to have a big influence on your life which is drastic, hard to predict in what it entails, and locked in ("Jesus take the wheel" where the driver is not a particularly good person or driver).

It is a bit murky what kind of delineation those that do make a division between characters and tulpas are after. Does everyone that thinks about being Superman vividly enough share the character but have distinct tulpas about it? Or is it that characters are less defined and tulpas are more fleshed out and complete in their characterization?
Nox ML10

I don't think I'm bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.

I think that the question of personhood is independent of the question of how to aggregate utility or how organize society, so I think that arguments about the latter have no bearing on the former.

I don't have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.

3Slider
I had zero idea what a tulpa is before reading this, and did an independent non-guided light search to get even some idea. I do not think this was unexpected. A definition would have been really nice, or a situation rather than raw concepts. I had a serious contender that this is a fiction sci-fi question, such as how ethics apply to Lain of Serial Experiments Lain. I was wondering whether Vax'ildan is a tulpa (that is at least factual). There is also a meme that "you are your masks"; does that deal with tulpas?
Nox ML40

Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system

This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don't think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a perso... (read more)

Answer by Nox ML*50

My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.

If I am a person and have a tulpa and they are not a person of their own, then there must either (a) exist some statement which is a requirement for personhood and which is true about me but not true about the tulpa, or (b) the tulpa and I must be the same person.

In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in... (read more)

3Slider
I don't know the terminology that well, but it seems that this analysis is bundling a lot of stuff together that might come apart in this context. People that do not have (additional) tulpas have one information-processing system that houses one personality. Call the "discrete information-processing system" a collective, and a personality the one that has psychological traits, states, and beliefs. The usual configuration, a collective of one personality, is apparently called a singlet.

One could argue that humans get their social standing based on their collective rather than their personality. If there is a cookie jar that has a sign "one cookie per person", under this theory a collective is designated only one cookie and gets the calories only once (but if sweetness experiences are meant, 2 might be appropriate, especially if the personalities can't participate in the same cookie munching).

For some things it could make sense that humans get their standing from having a unique psychological viewpoint. If there is a need to vote on what a group of people are going to do, then under this take each person gets a vote, and a 2-personality collective gets to use 2 votes, and this is basically fair towards the singlets (or, if it is based on additional cohesion imposed by acting as a group, the collective gets a single vote, as the cohesion between the personalities is pre-established and taking that as a factor would be double counting).

Then there is the possibility of a collective of 0 personalities. It seems that at least it can't be overtly egoic action.