My view is compatible with the existence of actual infinities within the physical universe. One potential source of infinity is, as you say, the possibility of infinite subdivision of spacetime. Another is the possibility that spacetime is unboundedly large. I don't have strong opinions one way or the other on whether these possibilities are true.
The assumption is that everything is made up of small physical parts. I do not assume or believe that it's easy to predict the behavior of large physical systems from those small physical parts. But I do assume that the behavior of large physical systems is determined solely by their smaller parts.
The tautology is that any explanation of large-scale behavior that invokes the existence of things other than the small physical parts must be wrong, because those other things cannot have any effect on what happens. Note that this does not mean that we need to desc...
I completely agree that reasoning about worlds that do not exist reaches meaningful conclusions, though my view classifies that as a physical fact (since we produce a description of that nonexistent world inside our brains, and this description is itself physical).
it becomes apparent that if our physical world wasn’t real in a similar sense, literally nothing about anything would change as a result.
It seems to me like if every possible world is equally not real, then expecting a pink elephant to appear next to me after I submit this post seems just as ...
Reasoning being real and the thing it reasons about being real are different things.
I do agree with this, but I am very confused about what your position is. In your sibling comment you said this:
Possibly the fact that I perceive the argument about reality of physics as both irrelevant and incorrect (the latter being a point I didn’t bring up) caused this mistake in misperceiving something relevant to it as not relevant to anything.
The existence of physics is a premise in my reasoning, which I justify (but cannot prove) by using the observation that...
Okay, let's forget the stuff about the "I", you're right that it's not relevant here.
For existence in the sense that physics exists, I don’t see how it’s relevant for reasoning, but I do see how it’s relevant to decision making
Okay, I think my view actually has some interesting things to say about this. Since reasoning takes place in a physical brain, reasoning about things that don't exist can be seen as a form of physical experiment, where your brain builds a description that has the properties we assume the thing that doesn't exist would have if ...
I don't say in this post that everything can be deduced from bottom-up reasoning.
The fact that I live in a physical world is just a fact that I've observed, it's not a part of my values. If I lived in a different world where the evidence pointed in a different direction, I would reason about the different direction instead. And regardless of my values, if I stopped reasoning about the physical world, I would die, and this seems to me to be an important difference between the physical world and other worlds I could be thinking about.
Of course this is predicated on the concept of "I" being meaningful. But I think that this is better supported by my observations than the idea that every possible world exists and the idea that probability just represents a statement about my values.
clearly physical brains can think about non-physical things.
Yes, but this is not evidence for the existence of those things.
But it’s not conclusive in every case, because the simplest adequate explanation need not be a physical explanation.
There is one notion of simplicity where it is conclusive in every case: every explanation has to include physics, and then we can just cut out the extra stuff from the explanation to get one that postulates strictly fewer things and makes equally good predictions.
But you're right, there are other notions of simple fo...
Whatever "built on top of" means.
In ZFC, the Axiom of Infinity can be written entirely in terms of ∈, ∧, ¬, and ∀. Since all of math can be encoded in ZFC (plus large cardinal axioms as necessary), all our knowledge about infinity can be described with ∀ as our only source of infinity.
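For concreteness, here is one standard rendering of the Axiom of Infinity (a sketch I'm adding for illustration; exact formulations vary by textbook):

```latex
\exists I \,\Bigl( \exists e \,\bigl(e \in I \land \forall z\,\lnot(z \in e)\bigr) \;\land\; \forall x \,\bigl(x \in I \rightarrow \exists y \,(y \in I \land \forall z\,(z \in y \leftrightarrow (z \in x \lor z = x)))\bigr) \Bigr)
```

Here ∃x φ abbreviates ¬∀x ¬φ; →, ∨, and ↔ are definable from ∧ and ¬; and z = x can be rewritten via extensionality as ∀w (w ∈ z ↔ w ∈ x). So the sentence really does bottom out in ∈, ∧, ¬, and ∀.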
Only for the subset of maths that’s also physical. You can’t resolve the Axiom of Choice problem that way.
You can't resolve the Axiom of Choice problem in any way. Both it and its negation are consistent with the rest of ZF (assuming ZF itself is consistent).
Again: every mathematical error is a real physical event in someone's brain, so, again, physics guarantees nothing.
I don't get what you're trying to show with this. If I mistakenly derive in Peano Arithmetic that 2 + 2 = 3, I will find myself shocked when I put 2 apples inside a bag that already contains 2 apples and find that there are now 4 apples in that bag. Incorrect mathematical reasoning is physically distinguishable from correct mathematical reasoning.
There are, of course, lots of infinities in maths.
Everything we know about all other infinities can be built on top of just FORALL in first-order logic.
Sure, I think I agree. My point is that because all known reasoning takes place in physics, we don't need to assume that any of the other things we talk about exist in the same way that physics does.
I even go a little further than that and assert that assuming that any non-physical thing exists is a mistake. It's a mistake because it's impossible for us to have evidence in favor of its existence, but we do have evidence against it: that evidence is known as Occam's Razor.
Physics doesn’t guarantee that mathematical reasoning works.
All of math can be built on top of first-order logic. In the sub-case of propositional logic, it's easy to see entirely within physics that if I observe that "A AND B" corresponds to reality, then when I check if "A" corresponds to reality, I will also find that it does. Every such deduction in propositional logic corresponds to something you can check in the real physical world.
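As a toy illustration of what "checking a deduction against the world" amounts to (a sketch of my own, not part of the original exchange), we can enumerate every joint state of A and B and confirm that no state makes "A AND B" hold while "A" fails:

```python
from itertools import product

# Enumerate all possible truth assignments to the atoms A and B.
# Each assignment stands in for a way the physical world could be.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if (a and b) and not a  # "A AND B" holds but "A" fails
]

# The deduction "A AND B, therefore A" is valid exactly when
# no assignment is a counterexample.
assert counterexamples == []
print("In every state where 'A AND B' holds, 'A' holds too.")
```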
The only source of infinity in first-order logic is the quantifiers, of which only one is needed: FORALL, which is basically just ...
I haven't used the word "reduce" since you gave a definition of it in the other thread which didn't match the precise meaning I was aiming for. The meaning I am aiming for is given in this paragraph from this post:
...If we take as assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn’t matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of if it’s easy to compute for humans), ther
There are answers to that question.
If you don't mind, I would be interested in a link to a place that gives those answers, or at least a keyword to look up to find such answers.
Well if you're not saying it, then I'm saying it: this is a mysterious fact about physics ;P
I interpreted "which is not the same as being some sort of refutation" as expressing disagreement, and I knew my use of the word "contradicts" was not entirely correct according to its definition, but I couldn't think of a more accurate word, so I figured it was "close enough" and used it anyway (which is a bad communication habit I should probably try to overcome, now that I'm explicitly noticing it). I'm sorry if I came across harshly in my comment.
I disagree that what you're saying contradicts what I'm saying. The physical world is ordered in such a way that the reasoning you described works: this is a fact about physics. You are correct that it is a mysterious fact about physics, but positing the existence of math does not help explain it; it merely changes the question from "why is physics ordered in this way" to "why is mathematics ordered in this way".
This is fair, though the lack of experiments showing the existence of anything macro that doesn't map to sub-micro state also adds a lot of confidence, in my opinion, since the amount of hours humans have put into performing scientific experiments is quite high at this point.
Generally I'd say that the macro-level irrelevance of an assumption means that you can reject it out of hand, and lack of micro-level modelling means that there is work to be done until we understand how to model it that way.
If you accept that the existence of mathematical truths beyond physical truths cannot have any predictive power, then how do you reconcile that with this previous statement of yours:
Presupposing things without evidence
As you can see, I am not doing that.
I will say again that I don't reject any mathematics. Even 'useless' mathematics is encoded inside physical human brains.
If we take as assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn't matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of if it's easy to compute for humans), there is a simple conclusion that follows logically from that.
This conclusion is that nothing extraphysical can have any predictive power above what we can predict from knowledge about physics. This follows because for s...
This is only correct if we presuppose that the concept of "mathematically true" is a meaningful thing separate from physics. The point this post is getting at is that we can still accept all human mathematics without needing to presuppose that there is such a thing. Since not presupposing this is strictly simpler, and presupposing it does not give us any predictive power, we ought not to assume that mathematics exists separately from physics.
This is not just a trivial detail. Presupposing things without evidence is the same kind of mistake as Russell's teapot, and small mistakes like that will snowball into larger ones as you build your philosophy on top of them.
I agree that they are not symmetrical. My point with that thought experiment was to counter one of their arguments, which as I understand it can be paraphrased to:
In your thought experiment, the people who bet that they are in the last 95% of humans only win in aggregate, so there is still no selfish reason to think that taking that bet is the best decision for an individual.
My thought experiment with the dice was meant to show that this reasoning also applies to regular expected utility maximization, so if they use that argument to dismiss all anthrop...
You do this 100 times; would you say you ought to find your number >5 about 95 times?
I actually agree with you that there is no single answer to the question of "what you ought to anticipate"! Where I disagree is that I don't think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for correctly guessing whether your number is >5, then you should guess that your number is >5 every time.
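To make the claim concrete, here is a minimal simulation of that policy (a sketch assuming the fair 100-sided die and per-round reward from the thought experiment above):

```python
import random

TRIALS = 100_000

# Policy from the thought experiment: always guess "the roll is > 5"
# on a fair 100-sided die, and collect a reward when the guess is right.
wins = sum(1 for _ in range(TRIALS) if random.randint(1, 100) > 5)

print(f"Guessing '>5' every time wins {wins / TRIALS:.1%} of the time")
# Expected output: roughly 95.0%, matching the 95 faces above 5.
```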
My justification for this is that objectively, those who make decisions this way will tend to have more reward a...
By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don't. So I don't think it takes any additional assumptions to conclude that even selfish people should say yes.
From what I can tell, a lot of your arguments seem to be applicable even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess if they rolled a number >5 or not, giving them some reward if they guess correctly. Then they reroll and ask a...
Suppose when you are about to die, time freezes, and Omega shows up and tells you this: "I appear once to every human who has ever lived or will live, right when they are about to die. Answer this question with yes or no: are you in the last 95% of humans who will ever live in this universe? If your answer is correct, I will bring you to this amazing afterlife that I've prepared. If you guess wrong, you get nothing." Do you say yes or no?
Let's look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5...
I like the distinctions you make between sentient, sapient, and conscious. I would like to bring up some thoughts about how to choose a morality that I think are relevant to your points about death of cows and transient beings, which I disagree with.
I think that when choosing our morality, we should do so under the assumption that we have been given complete omnipotent control over reality and that we should analyze all of our values independently, not taking into consideration any trade-offs, even when some of our values are logically impossible to satisf...
The reason I reject all the arguments of the form "mental models are embedded inside another person, therefore they are that person" is that this argument is too strong. If a conscious AI was simulating you directly inside its main process, I think you would still qualify as a person of your own, even though the AI's conscious experience would contain all your experiences in much the same way that your experience contains all the experiences of your character.
I also added an addendum to the end of the post which explains why I don't think it's safe to assume that you feel everything your character does the same way they do.
I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.
I also think that truth is good in and of itself. I want to know the truth and I think it's good in general when people know the truth.
Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.
Telling the truth to a men...
Points similar to this have come up in many comments, so I've added an addendum at the end of my post where I give my point of view on this.
I can definitely create mental models of people who have a pain-analogue which affects their behavior in ways similar to how pain affects mine, without their pain-analogue causing me pain.
there's no point in reducing this to a minimal Platonic concept of 'simulating' in which simulating excruciating pain causes excruciating pain regardless of physiological effects.
I think this is the crux of where we disagree. I don't think it matters if pain is "physiological" in the sense of being physiologically like how a regular human feels pain. I only care if th...
I don't personally think I'm making this mistake, since I do think that saying "the conscious experience is the data" actually does resolve my confusion about the hard problem of consciousness. (Though I am still left with many questions.)
And if we take reductionism as a strongly supported axiom (which I do), then necessarily any explanation of consciousness will have to be describable in terms of data and computation. So it seems to me that if we're waiting for an explanation of experience that doesn't boil down to saying "it's a certain type of data and computation", then we'll be waiting forever.
My best guess about what you mean is that you are referring to the part in the "Ethics" section where I recommend just not creating such mental models in the first place?
To some extent I agree that mortality doesn't mean it should've never lived, and indeed I am not against having children. However, after stumbling on the power to create lives that are entirely at my mercy and very high-maintenance to keep alive, I became more deontological about my approach to the ethics of creating lives. I think it's okay to create lives, but you must put in a best effo...
I wouldn't quite say it's a typical mind fallacy, because I am not assuming that everyone is like me. I'm just also not assuming that everyone is different from me, and using heuristics to support my inference that it's probably not too uncommon, such as reports by authors of their characters surprising them. Another small factor in my inference is the fact that I don't know how I'd write good fiction without making mental models that qualified as people, though admittedly I have very high standards with respect to characterization in fiction.
(I am aware t...
The reason I care if something is a person or not is that "caring about people" is part of my values. I feel pretty secure in taking for granted that my readers also share that value, because it's a pretty common one and if they don't then there's nothing to argue about since we just have incompatible utility functions.
What would be different if it were or weren’t, and likewise what would be different if it were just part of our person-hood?
One difference that I would expect in a world where they weren't people is that there would be some feature you c...
I do not think that literally any mental model of a person is a person, though I do draw the line further than you.
What are your reasons for thinking that mental models are closer to Markov models than tulpas? My reason for leaning more on the latter side is my own experience writing, where I found it easy to create mental models of characters who behaved coherently and with whom I could have long conversations on a level above even GPT-4, let alone Markov models.
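For concreteness, this is roughly what I mean by a Markov model of a character (a minimal bigram sketch with a made-up toy corpus; each word is chosen from the previous word alone, which is why such a model can't sustain a long coherent conversation):

```python
import random
from collections import defaultdict

# A bigram Markov model: the next word depends only on the current word.
corpus = (
    "i think death is bad . i think truth is good . "
    "i want to know the truth ."
).split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Walk the chain: each step forgets everything but the last word."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("i"))
# Output is locally plausible but has no memory, goals, or coherence
# beyond one word of context.
```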
Another piece of evidence is this study. I haven't done any actual digging to see if the method...
I disagree that it means that all thinking must cease. Only a certain type of thinking, the one involving creating sufficiently detailed mental models (edit: of people). I have already stopped doing that personally, though it was difficult and has harmed my ability to understand others. Though I suppose I can't be sure about what happens when I sleep.
Still, no, I don't want everyone to die.
That's right. It's why I included the warning at the top.
One of my difficulties with this is that it seems to contradict one of my core moral intuitions: that suffering is bad. It seems to contradict it because I can inflict truly heinous experiences onto my mental models without personally suffering for it, but your point of view seems to imply that I should be able to write that off just because the mental model happens to be continuous in space-time with me. Or am I misunderstanding your point of view?
To give an analogy and question of my own, what would you think about an alien unaligned AI simulating a human ...
Your heuristic is only useful if it's actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it's very important to get the right heuristics: I've been wrong about what qualified as a person before, and I have blood on my hands because of it.
I don't think it's true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my ow...
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me. When I can’t will it away, when it resists me, when it’s self sustaining.
I disagree with this. Why should it matter if someone is dependent on someone else to live? If I'm in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?
I think integration and termination are two different things. It's possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can't complain. But it's also possible to just terminate one without changing the other, and that is death.
But currently I am thinking that singlet personalities have less relevance than I thought and harm/suffering is bad in a way that is not connected to having an experiencer experience it.
I don't understand what you mean by this. I do think that tulpas experience things.
Terminating a tulpa is bad for the same reasons that homicide is bad.
That is exactly my stance. I don't think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it's immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.
you should head off to cancel Critical Role and JJR Martin.
I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it's possi...
That's fair. I've been trying to keep my statements brief and to the point, and did not consider the audience of people who don't know what tulpas are. Thank you for telling me this.
The word "tulpa" is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I'll just use my definition.
It's easiest to first explain my own...
I don't think I'm bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.
I think that the question of personhood is independent of the question of how to aggregate utility or how to organize society, so I think that arguments about the latter have no bearing on the former.
I don't have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.
Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system
This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don't think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a perso...
My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.
If I am a person and have a tulpa and they are not a person of their own, then there must either (a) exist some statement which is a requirement for personhood and which is true about me but not true about the tulpa, or (b) the tulpa and I must be the same person.
In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in...
It's this one.
Given that you're asking this question, I still haven't been clear enough. I'll try to explain it one last time. This time I'll talk about Conway's Game of Life and AI. The argument will carry over straightforwardly to physics and humans. (I k...