All of woodchopper's Comments + Replies

If an exact copy of you were to be created, it would have to be stuck in the hole as well. If the 'copy' is not in the hole, then it is not you, because it is experiencing different inputs and has a different brain state.

and more specifically you should not find yourself personally living in a universe where the history of your experience is lost. I say this because this is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least not find ourselves in a universe where AI destroys us all, because alignment will turn out to be practically easier than we expect it to be in theory.

Can you elaborate on this idea? What do you mean by 'the history of your experience is lost'? Can you supply some links to read on this whole theory?

0Gordon Seidoh Worley
Basically one way of understanding the consequences of https://en.wikipedia.org/wiki/Anthropic_principle and https://en.wikipedia.org/wiki/Many-worlds_interpretation

Could you qualify that statement?

Can you make an AGI given only primordial soup?

An AI will have a utility function. What utility function do you propose to give it?

What values would we give an AI if not human ones? Giving it human values doesn't necessarily mean giving it the values of our current society. It will probably mean distilling our core moral beliefs.

If you take issue with that, all you are saying is that you want an AI to have your values rather than humanity's as a whole.

Developing an AGI (and then an ASI) will likely involve a series of steps involving lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice because there's a lot of utility in having the best AI so far.

So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first; you want to make sure your AI is the best for most of the race, for a whole heap of reasons.

0RyanCarey
If your path to superintelligence spends lots of time in regions with intermediate-grade AIs that can generate power or intelligence, then that is true, so of course the phrase "arms race" aptly describes such situations. It's in the case where people are designing a superintelligence "from scratch" that the term "arms race" seems inappropriate.

That's a partial list. It also takes good universities, a culture that produces a willingness to take risks, a sufficient market for good products, and I suspect a litany of other things.

I think once a society that genuinely innovates gets started, it can be hard to kill that off, but it can be done and has been done. The problem is, as you mentioned, very few societies have ever been particularly innovative.

It's easy to use established technology to build a very prosperous first world society. For example: Australia, Canada, Sweden. But it's much harder ... (read more)

0Lumifer
Yes, of course. This implies that there are good reasons for it. You can look at it in the exploit/explore framework and going full explore is rarely a good choice. Notably, betting on innovation produces a large variance of outcomes and you need to be sure you can survive that variance.

I think it's an interesting point that innovation is actually very rare, and I agree. It takes a special combination of things for it to happen, and that combination doesn't come around much. Britain was extremely innovative a few hundred years ago. In fact, they started the industrial revolution, literally revolutionising humanity. But today they do not strike me as particularly innovative, even with that history behind them.

I don't think America's ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean... (read more)

0Lumifer
I don't know about that. People have been discussing how an innovation hub (like Silicon Valley) appears and how one might create one -- that is a difficult problem, partially because starting a virtuous circle is hard. But general innovation in a society? Let me throw in some factors off the top of my head:
* Low barriers to entry (to experimentation, to starting up businesses, etc.). That includes a permissive legal environment and a light regulatory hand.
* A properly Darwinian environment where you live or die (quickly) by market success and not by whether you managed to bribe the right bureaucrat.
* Relatively low stigma attached to failure.
* Sufficient numbers of high-IQ people who are secure enough to take risks.
* Enough money floating around to fund high-risk ventures.
* For basic science, enough money coupled with the willingness to throw it at very high-IQ people and say "Make something interesting with it".

You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don't I want my dog to die? Obviously, when I'm actually dead, I won't want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.

0Dagon
I think we can all agree that an entity's anticipated future experiences matter to that entity. I hope (but would be interested to learn otherwise) that imaginary events such as fiction don't matter. In between, there is a hugely wide range of how much it's worth caring about distant events. I'd argue that outside your light-cone is pretty close to imaginary in terms of care level. I'd also argue that events after your death are pretty unlikely to affect you (modulo basilisk-like punishment or reward).

I actually buy the idea that you care about (and are willing to expend resources on) subjunctive realities on behalf of not-quite-real other people. You get present value from imagining good outcomes for imagined-possible people even if they're not you. This has to get weaker as it gets more distant in time and more tenuous in connection to reality, though.

But that's not even the point I meant to make. Even if you care deeply about the far future for some reason, why is it reasonable to prefer weak, backward, stupid entities over more intelligent and advanced ones? Just because they're made of similar meat-substance as you seems a bit parochial, and hypocritical given the way you treat slightly less-capable organic beings like lettuce.

Woodchopper's post indicated that he'd violently interfere with (indirectly via criminalization) activities that make it infinitesimally more likely to be identified and located by ETs. This is well beyond reason, even if I overstated my long-term lack of care.

In Australia we currently produce enough food for 60 million people. This is without any intensive farming techniques at all. This could be scaled up by a factor of ten if it was really necessary, but quality of life per capita would suffer.

I think smaller nations are as a general rule governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.

Each human differs in their values. So it is impossible to build the machine of which you speak.

0Houshalter
But humans share a lot of values (e.g. wanting to live and not be turned into a dyson sphere.) And a collection of individuals may still have a set of values (see e.g. coherent extrapolated volition.)

Raid Google and shut them down immediately. Start a Manhattan project of AI safety research.

1Lumifer
I find your faith in the government's benevolence... disturbing.

I really like that you mention world government as an existential risk. It's one of the biggest ones. Competition is a very good risk reduction process. It has been said before that if we all lived in North Korea, the future of humanity might well be quite bleak indeed. North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc).

No organisation can ever get it right all th... (read more)

0g_pepper
I agree with your concerns regarding one world government. However, I am curious why you think that the following were "chance developments" of Britain: rule of law, property rights, contracts, education, reading, writing. Pretty much all of those things were in use in multiple times/locales throughout the ancient world. Are you arguing that Britain originated those things? Or that they were developed in Britain independently of their prior existence elsewhere?
0ChristianKl
The outside world also contributes to its stability. The current leader was educated in Switzerland, and he might be a less rational actor if he had simply been educated at a North Korean school.
0turchin
While a world government may be an x-risk if it makes a mistake, several national states fighting each other could also be an x-risk, and I don't know which is better.

You might not care, but a lot of humans do care, and will continue to care. That's why we're discussing it.

1Dagon
A lot of humans care (or at least signal that they care in far-mode) about what happens in the future. That doesn't make it sane or reasonable. Why does it matter to anyone today whether the beings inhabiting Earth's solar system in 20 centuries are descended from apes, or made of silicon, or came from elsewhere?

There have been wars over land since humans have existed. And non-interaction, even if initially widespread, clearly stopped eventually, once it became clear the world wasn't infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn't stop European tribes from having a series of wars increasing in intensity until we had WW1 and WW2, which were unfathomably violent and huge clashes over ideology and resources. This is what happened in Europe, where multiple tribes of comparable st... (read more)

0TheAncientGeek
You write as though the amount of free land or buffer zone was constant, that is, as though the world population was constant. My point was that walking in separate directions was a more viable option when the population was much lower... that, where available, it is usually an attractive option because it is low-cost. The point is probabilistic: there have always been wars, the question is how many. Do I really have to explain why Australia wasn't a buffer zone between European nations? On a planet, there is no guarantee that rival nations won't be cheek by jowl, but galactic civilisations are guaranteed to be separated by interstellar space. Given reasonable assumptions about the scarcity of intelligent life, and the light barrier, the situation is much better than it ever was on Earth.
0Brillyant
This seems like very sound reasoning.

Remember also that viruses that kill lots of people tend to rapidly mutate into less lethal strains due to evolutionary pressures. This is what happened with the 1918 pandemic.

0James_Miller
Yes, but evolutionary pressures wouldn't be shaping bioterrorism-created viruses in the short run. Also, until we can cure the common cold, what's to prevent terrorists (in 10 years, with CRISPR) from making a cold virus that's much more virulent, that stays hidden for a few months, and then kills its host?

Extremely low. I have never believed any sort of pathogen could come close to wiping us out. They can be defeated by basic breather and biohazard technology. But the main key is that with improved and more accessible biotechnology, our ability to create vaccines and other defence mechanisms against pathogens is greatly enhanced. I actually think the better biotechnology gets, the less likely any pathogen is to wipe us out, even given the fact that terrorists will be able to misuse it more easily.

0James_Miller
I hope you are right.

Kicking the can down the road doesn't seem to be a likely action of an intelligent civilisation.

Best to control us while they still can, or while any resulting war would not cause unparalleled destruction.

0morganism
Ah, you have been at Atomic Rockets, reading up on aliens? This is the only reason they came up with: http://www.projectrho.com/public_html/rocket/aliens.php "So what might really aged civilizations do? Disperse, of course, and also not attack new arrivals in the galaxy, for fear that they might not get them all. Why? Because revenge is probably selected for in surviving species, and anybody truly looking out for long-term interests will not want to leave a youthful species with a grudge, sneaking around behind its back..." This is why you want to have colonies and habitats outside the Sol system especially: https://www.researchgate.net/publication/283986931_The_Dark_Forest_Rule_One_Solution_to_the_Fermi_Paradox
0ChristianKl
Anything remotely resembling humans can't win a war against an extremely smart AI that had millions of years to optimize itself.
2TheAncientGeek
Why? Provide some reasoning. Non-interaction was historically an option when the human population was much lower. Since the universe appears not to be densely populated, my argument is that the same strategy would be favoured.

The development of Native Americans has been stunted and they simply exist within the controlled conditions imposed by the new civilization now. They aren't all dead, but they can't actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in. Very limited control over our own affairs in desolate parts of the universe with the addition of welfare payments to give us some sort of quality of life.

If we were rational, we would stop their continued self-directed development, because having a rapidly advancing alien civilisation with goals different to ours is a huge liability.

So maybe we would not wipe them out, but we would not let them continue on as normal.

Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?

As has come to light with research on superintelligences, an actor does not have to hate us to destroy us; it merely has to realise that we conflict, even in a very minor way, with its goals. As a rapidly advancing intelligent civilisation, it is likely our continued growth and existence will hamper the goals of other intelli... (read more)

6NancyLebovitz
Not being bored. Living systems (and presumably more so for living systems that include intelligence) show more complex behavior than dead systems.
0ChristianKl
"permanently stunting our continued development" might be the only way not to destroy the human race. It's not clear that we have a realistic change to develop capabilities that threaten a civilization that has a head start of 100 million years. In addition it's worth noting that a galactic civilization needs moral norms that allow societies that exist millions of light years apart and to coexist when it's not possible to attribute attacks to their sources. Hanson's argues in the Age of Em that Em's are likely religious and might follow religious norms. There are Buddhists who don't eat meat for religious reasons and in a similar way an alien civilization might not kill us for religious reasons. You don't need a special effort to broadcast signals for a civilization that cares about emerging species to listen to normal radio broadcasts.
0Dagon
I don't think humans as a species, or earth creatures as a... evolutionary life-root, have coherent goals or linear development in a way that makes this concern valid. If a more intelligent self-sustaining agent or group comes along and replaces humans, good. Whether that's future-humans, human-created AIs, or ETs doesn't matter all that much. Did the people of the 19th century make a mistake by creating and educating the next generations of humans which replaced them?

As an aside, it's far too late to stop broadcasts. The marginal risk of discovery imposed by any action today is pretty much zero - we've been sending LOTS of EM outward in all directions for many, many decades, and there's no way to recall any of it.
0Lumifer
I'm not sure what the "realistic" word is doing in here. Do you, by any chance, mean "one I can imagine"? I can imagine many things.
5Val
If we developed practical interstellar travel, and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit into our values to consider exterminating them as the primary choice. And how did we develop our values like this? I guess at least in some part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even those which got extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (unless in rare circumstances, when they were a pest or a dangerous disease vector). Unless the extraterrestrial species are the only macroscopic life-form on their planet, it's likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.

This doesn't seem very coherent.

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

OK. Then that means if I choose torture, I am alone. If I choose the dust specks, I am not alone. I don't want to be tortured, and don't really care about 3^^^3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.

A perfect utilitarian would choose torture, though, because one person getting tortured is technically not as bad from a utilitarian point of view as 3^^^3 people getting dust specks in their eyes.
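To spell out that comparison (a minimal sketch; the symbols T and ε are my own illustrative shorthand for the disutility of the torture and of a single dust speck, not values from the original discussion):

```latex
% Illustrative total-utilitarian comparison (T and \epsilon are assumed shorthand).
\[
U_{\text{torture}} = -T, \qquad
U_{\text{specks}} = -\,(3\uparrow\uparrow\uparrow 3)\,\epsilon .
\]
% For any fixed \epsilon > 0 and any finite T, the tower 3^^^3 dwarfs T/\epsilon, so
\[
(3\uparrow\uparrow\uparrow 3)\,\epsilon \;\gg\; T
\quad\Longrightarrow\quad
U_{\text{specks}} < U_{\text{torture}} ,
\]
% which is why a perfect total utilitarian picks the torture.
```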

I think a very interesting trait of humans is that we can, for the most part, collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population, with varying IQs, some extremely intelligent, believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.

The ability of humans to totally turn off their rationality, to organise the 'facts' as they see them to confirm their biases, is nothing short of incredible. If humans t... (read more)

0Houshalter
Is this really true? It seems that humans have the capacity to endlessly debate many issues without changing their minds, including philosophy, religion, scientific debates, conspiracy theories, and even math, on occasion. Almost any subject can create deeply nested comment threads of people going back and forth debating. Hell, I might even be starting one of those right now, with this comment. I don't think there's anything particularly special about politics.

Lesswrong has gotten away with horribly controversial things before, like e.g. torture vs dust specks, or AI Risk, etc. There have even been political subjects on occasion. I'd just say it's off topic. I don't come to Lesswrong to read about politics; I get that from almost everywhere else. Lesswrong doesn't really have anything to add. But maybe if there is a political issue that either isn't too controversial, or isn't too mainstream, I wouldn't mind it being discussed here. E.g. there are sometimes discussions about genetically engineered babies, and that even fits well with other lesswrong subjects.
0Lumifer
That looks to me to be just false. A trivial counterexample: "It is difficult to get a man to understand something, when his salary depends on his not understanding it." -- Upton Sinclair.
1Gleb_Tsipursky
Yeah, good point there. That's why it might work in a small private setting of an LW meetup, but not so much on the open forum of LW.
6DanArmak
This is true not only connotationally (political topics cause humans to behave this way), but also denotationally: those topics which cause humans to behave this way, we call political (or 'tribal').

(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.

You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a

... (read more)
1pragmatist
I don't think that's true. The SSA will have different consequences if the simulated minds are expected to be very different from ours. If we suppose that simulated minds will have very different observations, experiences and memories from our own, and we consider the hypothesis that the vast majority of minds in our universe will be simulated, then SSA simply disconfirms the hypothesis. If I should reason as if I am a random sample from the pool of all observers, then any theory which renders my observations highly atypical will be heavily disconfirmed. SSA will simply tell us it is unlikely that the vast majority of minds are simulated. Which means that either civilizations don't get to the point of simulating minds or they choose not to run a significant number of simulations.

If, on the other hand, we suppose that a significant proportion of simulated minds will be quite similar to our own, with similar thoughts, memories and experiences, and we further assume that the vast majority of minds in the universe are simulated, then SSA tells us that we are likely simulated minds. It is only under those conditions that SSA delivers this verdict.

This is why, when Bostrom describes the Simulation Argument, he focuses on "ancestor-simulations". In other words, he focuses on post-human civilizations running detailed simulations of their evolutionary history, not just simulations of any arbitrary mind. It is only under the assumption that post-human civilizations run ancestor-simulations that the SSA can be used to conclude that we are probably simulations (assuming that the other two possible conclusions of the argument are rejected).

So I think it matters very much to the argument that the simulated minds are a lot like the actual minds of the simulators' ancestors. If not, the argument does not go through. This is why I said you seem to simply be accepting (2), the conclusion that post-human civilizations will not run a significant number of ancestor-simulations. Yo

We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory.

There is no limit to how perverted a view of the world a simulated agent could have.
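As a toy illustration of that kind of manipulation (purely a hypothetical sketch; the class and function names are mine, and nothing here is a claim about how a real simulation would be built):

```python
import random

# Toy sketch: a simulated agent whose world and memory the simulator can
# scramble at will, so its inferences about "reality" need not transfer
# outside the simulation.

GEOMETRIES = ["euclidean", "hyperbolic", "spherical", "impossible"]

class SimulatedAgent:
    def __init__(self):
        self.memory = []          # observations the agent believes it made
        self.current_room = None  # geometry the agent currently perceives

    def observe(self, geometry):
        self.current_room = geometry
        self.memory.append(geometry)

def step_simulation(agent):
    # Choose the room's geometry at random every time the agent moves.
    agent.observe(random.choice(GEOMETRIES))
    # Corrupt a random remembered observation, so even the agent's
    # history of "evidence" is unreliable.
    if agent.memory:
        i = random.randrange(len(agent.memory))
        agent.memory[i] = random.choice(GEOMETRIES)

agent = SimulatedAgent()
for _ in range(10):
    step_simulation(agent)
print(agent.current_room, agent.memory)
```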

I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.

So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which either are not reliable if we are in a simulation, or lead to an obviously wrong conclusion if we aren't in a simulation.

If I conclude that there ar... (read more)

2qmotus
Just wanted to point out that this is not necessarily true; in a large enough multiverse, there would be many identical copies of a mind, some of which would probably be "real minds" dwelling in "real brains", and some would be simulated.
1pragmatist
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.

Right. When I say "his conclusion is still true", I mean the conclusion that at least one of (1), (2) or (3) is true. That is the conclusion of the simulation argument, not "we are living in a simulation".

This, I think, is a possible difference between your position and Bostrom's. You might be denying the Self-Sampling Assumption, which he accepts, or you might be arguing that simulated and unsimulated minds should not be considered part of the same reference class for the purposes of the SSA, no matter how similar they may be (this is similar to a point I made a while ago about Boltzmann brains in this rather unpopular post). I actually suspect that you are doing neither of these things, though. You seem to be simply denying that the minds our post-human descendants will simulate (if any) will be similar to our own minds. This is what your game AI comparisons suggest.

In that case, your argument is not incompatible with Bostrom's conclusion. Remember, the conclusion of the simulation argument is that either (1), (2), or (3) is true. You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a significant number of highly accurate simulations of their ancestors. If that's all you're claiming, then you're not disagreeing with the simulation argument.
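One toy way to see why the restriction to ancestor-simulations matters under the Self-Sampling Assumption is a small Bayesian calculation (all numbers and names below are illustrative assumptions of mine, not values from Bostrom's argument):

```python
def posterior_mostly_simulated(prior_h1, frac_simulated,
                               p_my_obs_given_sim, p_my_obs_given_real):
    """Toy SSA calculation: P(most minds are simulated | my observations),
    treating me as a random sample from the pool of all observers."""
    # Under H1 ("most minds are simulated") a random observer is simulated
    # with probability frac_simulated.
    like_h1 = (frac_simulated * p_my_obs_given_sim
               + (1 - frac_simulated) * p_my_obs_given_real)
    # Under H2 ("almost no simulations") a random observer is unsimulated.
    like_h2 = p_my_obs_given_real
    prior_h2 = 1 - prior_h1
    return prior_h1 * like_h1 / (prior_h1 * like_h1 + prior_h2 * like_h2)

# Ancestor-simulations: simulated minds have observations like mine.
print(posterior_mostly_simulated(0.5, 0.999, 1e-9, 1e-9))
# ~0.5: H1 is not disconfirmed, and given H1 I am almost surely simulated.

# "Weird" simulations: my observations are highly atypical for simulated minds.
print(posterior_mostly_simulated(0.5, 0.999, 1e-12, 1e-9))
# ~0.002: SSA heavily disconfirms "most minds are simulated".
```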

No. Think about what sort of conclusions an AI in a game we make would come to about reality. Pretty twisted, right?

0RowanE
It sounds like you expect it to be obvious, but nothing springs to mind. Perhaps you should actually describe the insane reasoning or conclusion that you believe follows from the premise.

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are s... (read more)

1TheOtherDave
Hm. Let me try to restate that to make sure I follow you. Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka "ancestral simulations", and (Esw) simulated environments that don't closely resemble Er, aka "weird simulations." The question is, is my current environment E in Er or not?

Bostrom's argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise, I should assume (E in Esa).

Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.

Have I understood you?

First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.

Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other pla... (read more)

4MrMind
While I do not agree with the conclusion of the simulation argument, I think your rebuttal is flawed: we can safely reason about the reality outside the simulation if we presume that we are inside a realistic simulation, that is, a simulation whose purpose is to mimic as closely as possible the reality outside. I don't know if it's made explicit in the exposition you read, but I've always assumed the argument was about a realistic simulation. Indeed, if the laws of physics are computable, you can even have an emulation argument.
2bogus
Of course you can. Anyone who talks about any sort of 'multiverse' - or even causally disconnected regions of 'our own universe' - is doing precisely this, whether they realize it or not.

I think I agree with what you're saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you start killing people who run factory farms, you're probably going to turn a lot of the world against you, and so fail in your goal. And you have to consider the best way of convincing yourself to continue pursuing your goal, now and into the future, since human goals can change depending on circumstances and experiences.

In terms of guilt, finding little tricks to r... (read more)

2Gram_Stone
Ah, I assumed the guilt would demotivate on net. Maybe it depends on how strongly you identify with utilitarian ideas.

You have to consider that humans don't have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by not eating an extra donut, but I'm going to give up on the entire task of being a utilitarian if I can't allow myself some luxuries.

4Gram_Stone
This is actually just the sort of thing that I'm trying to say. I'm saying that when you understand guilt as a source of information, and not a thing that you need to carry around with you after you've learned everything you can from it, then you can take the weight off of your shoulders. I'm saying that maybe if more people did this, it wouldn't be as hard to do extraordinary kinds of good, because you wouldn't constantly be feeling bad about what you conceivably aren't doing. Most of what people consider conceivable would require an unrealistic sort of discipline. Punishing people likely just reduces the amount of good that they can actually do. Am I right that we seem to agree on this?

What you are saying doesn't follow from the premises, and is about as accurate as me saying that magic exists and Harry Potter casts a spell on too-advanced civilisations.

Why would us launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing and all we are doing is manipulating that in constructive ways. Running a computer doesn't process more information than the wind blowing against a tree does; in fact, it processes far less.

So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked. Because I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't meld with intuition. For example, a person's brain is a complex machine; imagine it were (using some extremely advanced technology) modified seriously while a person was still conscious... (read more)

Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.

So go back to the scenario - you're killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?

0qmotus
Well.. Let's say I make a copy of you at time t. I can also make them forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions and ask you if you expect to get tickled. What do you reply? Does it make any sense to you to say that you expect to experience both being and not being tickled?

So, let's say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'?

If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'?

Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain?

If the computer told you it was going to torture the slightly-wrong cop... (read more)

0qmotus
1. Maybe; it would probably think so, at least if it wasn't told otherwise.
2. Both would probably think so.
3. All three might think so.
4. I find that a bit scary.
5. Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?

Why would something that is not atom to atom exactly what you are now be 'you'?

I think consciousness arises from physical processes (as Dennett says), but that's not really solving the problem or proving it doesn't exist.

Anyway, I think you are right in that if you think being mind-uploaded does or does not constitute continuing your personal identity or whatever, it's hard to say you are wrong. However, what if I don't actually know whether it does, yet I want to be immortal? Then we have to study that to figure out which things keep the real 'us' existing and which don't.

What if the persistence of personal identity is a meaningless pursuit?

0qmotus
Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?

If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?

If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night?

(Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)

0qmotus
If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer. Are you talking about the hard problem of consciousness? I'm mostly with Daniel Dennett here and think that the hard problem probably doesn't actually exist (but I wouldn't say that I'm absolutely certain about this), but if you think that the hard problem needs to be solved, then I guess this identity business also becomes somewhat more problematic.

Can you elaborate on the concept of a connection through "moment-to-moment identity"? Would for example "mind uploading" break such a thing?

0Kyre
Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that Konig's Lemma applied :-)

If I were to try to make a more serious argument it would go something like this. Defining identity, whether two entities are 'the same person', is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states (mind states would probably have been better than 'observer moments'), with outgoing edges leading to mind states a few moments later. That is kind of what I meant by "moment-by-moment" identity. By itself it is a local but not global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven't thought about whether it's a good one.

In the ordinary course of events these graphs aren't very interesting; they're just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, then that would give you a vertex with out-degree greater than one. So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting. Also, you could have two mind states that lead to the same successor mind state - for example where two different mind states only differ on a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph structured identity.

(On a side note, I think generally people treat splitting and merging of mind states in a way that is way too symmetrical. Splitting seems far easier - trivial once you can digitize a mind-state. Merging would be like a complex software version control problem, and you'd need to very carefully apply selective amnesia to achieve it.)

So, if we say "immortality" is h
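A minimal sketch of that graph formalism, with global identity as reachability under the transitive closure of the moment-to-moment relation (the class and function names are mine and purely illustrative; this is one possible reading, not Kyre's exact construction):

```python
from collections import defaultdict

# Vertices are mind states; an edge points to a mind state a few moments
# later. Splitting a mind-state gives out-degree > 1; merging gives
# in-degree > 1.

class IdentityGraph:
    def __init__(self):
        self.successors = defaultdict(set)  # mind state -> later mind states

    def moment(self, state, next_state):
        """Record that `state` evolves into `next_state` a few moments later."""
        self.successors[state].add(next_state)

    def same_person(self, a, b):
        """One reading of global identity: b is 'the same person' as a if b
        is reachable from a via the moment-to-moment relation."""
        seen, frontier = set(), [a]
        while frontier:
            s = frontier.pop()
            if s == b:
                return True
            if s not in seen:
                seen.add(s)
                frontier.extend(self.successors[s])
        return False

g = IdentityGraph()
g.moment("me@t0", "me@t1")
g.moment("me@t1", "upload@t2")      # splitting: one state, two successors
g.moment("me@t1", "biological@t2")
print(g.same_person("me@t0", "upload@t2"))       # True under this definition
print(g.same_person("upload@t2", "biological@t2"))  # False: the branches diverge
```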

The thing is, I'm just not sure if it's even a reasonable thing to talk about 'immortality' because I don't know what it means for one personal identity ('soul') to persist. I couldn't be sure if a computer simulated my mind it would be 'me', for example. Immortality will likely involve serious changes to the physical form our mind takes, and once you start talking about that you get into the realm of thought experiments like the idea that if you put someone under a general anaesthetic, take out one atom from their brain, then wake them up, you have a simi... (read more)

0qmotus
Isn't it purely a matter of definition? You can say that a version of you with one atom replaced is you, or that it isn't; or that a simulation of you either is or isn't you; but there's no objective right answer. It is worth noting, though, that if you don't tell the different-by-one-atom version, or the simulated version, of the fact, they would probably never question being you.

Currently it's pretty commonly believed that the end state of the universe is decayed particles moving away from every other particle at faster than the speed of light, therefore existing in an eternal and inescapable void. If you only have one particle you can't do calculations.

0Yosarian2
That's one possibility. It depends what the value of dark energy is, which isn't yet known.

What does it mean to be immortal? We haven't solved key questions of personal identity yet. What is it for one personal identity to persist?

0turchin
It is a good question. The problem of personal identity is one of the most complex, like aging. I am working on a map of identity solutions, and it is very large. If we decide that identity has definition I, then death is the abrupt disappearance of I, and immortality is the idea that death never happens. It seems that this definition of immortality doesn't depend on the definition of identity. But practically, the more fragile identity is, the more probable death is.

If you define yourself by the formal definition of a general intelligence then you're probably not going to go too far wrong.

That's what your theory ultimately entails. You are saying that you should go from specific labels ("I am a democrat") to more general labels ("I am a seeker of accurate world models") because it is easier to conform to a more general specification. The most general label would be a formal definition of what it means to think and act on an environment for the attainment of goals.

I don't think your theory is particularly useful.