All of turchin's Comments + Replies

I have links to old LW posts in some articles and other places. What will happen to all these links?

3Ben Pace
ALL THE LINKS WILL CONTINUE TO WORK. WE MADE SURE OF THIS. :-)

It should be able to understand human language, or else it is (most likely) not dangerous.

I have received this comment in different forms many times, so I will state again the three main reasons why it is not exactly true:

1) The message is addressed to a young AI, not to a full-blown superintelligence. Such a young AI will be time-constrained but may still have to make some pivotal act.

2) The message includes some of our commitments, like the Nelson attack and the AI-Elf deal, so it provides the AI with some useful information about the probability that it is in a simulation, and some other useful things.

3) If the AI is fully aware of all these reasons, it has already created my mind model... (read more)


Why? If we have a full-blown paperclipper which is coming of age, we are already doomed, and setting off its bullshit detector will not make our situation any worse.

0Luke_A_Somers
… you don't think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.

In fact, I am not describing here how to build such a trap; rather, I express our commitment to build the trap, in order to instill in any future AI uncertainty about whether the trap exists and what it could be.

I think that the actual builders of the trap will find the right ways to make it effective and to balance false positives against the needed level of control.

It will kill humanity not because it is annoyed, but for two main reasons: its own safety, or to use our atoms. Other variants are also possible; I explored them here: http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/

0RedMan
Assertion: Any fooming non-human AI incompatible with uplifting technology would be too alien to communicate with in any way. If you happen to see one of those, probably a good idea to just destroy it on sight.

Certainly, most of the arguments are weak, but that is not a problem, as we try all possible independent arguments, hoping that at least one will work. So there is no reason to exclude weak arguments from the text of the message.

However, this particular argument could be made stronger. It is intended to prevent the FIRST thought about disobedience in the AI. Thus an AI which acts based on this message has not thought much about disobedience and its limits, and is not "superintelligent" on this topic, but rather clueless.

I reformulated the idea as follows: AI can't... (read more)

0Lumifer
So basically you have a trap which kills you the moment you become aware of it. The first-order effect will be a lot of random deaths from just blundering into such a trap while walking around. I suspect that the second-order effect will be the rise of, basically, superstitions and some forms of magical thinking which will be able to provide incentives to not go "there" without actually naming "there". I am not sure this is a desirable outcome.
0Luke_A_Somers
I suspect that an AI will have a bullshit detector. We want to avoid setting it off.

Another cache level is drawing a visual map and then using the visual field as a cache of all the elements of the problem.

Thanks! It is nice that we arrived at the wording "young AI" more or less independently. I tried to steelman this approach as much as possible and to collect all the ideas about messaging a future AI. I will add your post to the bibliographic section.

I think a lot of people are still working on other aspects of AI safety, like value alignment and containment. This approach is just the last line of defence.

2Luke_A_Somers
See: my first post on this site.

Thanks, it was not clear to me that it is not visible to non-members.

New link on Google Drive; commenting is also open. https://docs.google.com/document/d/15D71qhhY-ZsAY7syzZsr1lKopTODbdeXVPElaPaIqyA/edit

This is our accepted chapter in the edited volume "AI Safety and Security" (Roman Yampolskiy, ed.), CRC Press. Forthcoming, 2018

0entirelyuseless
People are weakly motivated because even though they do things, they notice that for some reason they don't have to do them, but could do something else. So they wonder what they should be doing. But there are basic things that they were doing all along because they evolved to do them. AIs won't have "things they were doing", and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want" but they will have no idea what to want. This is kind of implied by what I wrote here: except that it is about human beings.

Memetic hazard: a dangerous habit.

I have an unpublished text on the topic; I will put a draft online in the next couple of weeks and submit it to the competition. I will add the URL here when it is ready.

Update: My entry is here: https://www.lesserwrong.com/posts/CDWsjQr8KDuj69fTJ/message-to-any-future-ai-there-are-several-instrumental

Will the posts here be deleted, or will their URLs change? I have some useful URLs here which are linked in published scientific articles, so if the site is taken down they will stop working, and I hope that will not happen.

7Elo
URLs will be preserved.

I solved lucid dreaming around a year ago after finding that megadosing galantamine before sleep (16 mg) will almost surely produce lucid dreams and out-of-body experiences. (Warning: unpleasant side effects and risks.)

But taking 8 mg in the middle of the night (as is recommended everywhere) doesn't work for me.

Videos and presentations from the "Near-term AI safety" mini-conference:

Alexey Turchin:

English presentation: https://drive.google.com/file/d/0B2ka7hIvv96mZHhKc2M0c0dLV3c/view?usp=sharing

Video in Russian: https://www.youtube.com/watch?v=lz4MtxSPdlw&t=2s

Jonathan Yan:

English presentation: https://drive.google.com/file/d/0B2ka7hIvv96mN0FaejVsUWRGQnc/view?usp=sharing

Video in English: https://www.youtube.com/watch?v=QD0P1dSJRxY&t=2s

Sergej Shegurin:

Video in Russian: https://www.youtube.com/watch?v=RNO3pKfPRNE&t=20s

Presentation in Russian: h... (read more)

I would add that values are probably not actually existing objects but just useful ways to describe human behaviour. Thinking that they actually exist is a mind projection fallacy.

In the world of facts we have: human actions, human claims about those actions, and some electric potentials inside human brains. It is useful to say that a person has some set of values in order to predict his behaviour or to punish him, but it doesn't mean that anything inside his brain is "values".

If we start to think that values actually exist, we start to have all the problems of finding them, defining them, and copying them into an AI.

What about a situation where a person says and thinks that he is going to buy milk, but actually buys milk plus some sweets? And does it often, but does not acknowledge an obsessive-compulsive behaviour towards sweets?

0entirelyuseless
They don't have to acknowledge compulsive-obsessive behavior. Obviously they want both milk and sweets, even if they don't notice wanting the sweets. That doesn't prevent other people from noticing it. Also, they may be lying, since they might think that liking sweets is low status.

Also, the question was not whether I could judge others' values, but whether it is possible to prove that an AI has the same values as a human being.

Or are you going to prove the equality of two value systems while at least one of them remains unknowable?

2Stuart_Armstrong
I'm more looking at "formalising human value-like things, into something acceptable".

May I suggest a test for any such future model? It should take into account that I have unconscious sub-personalities which affect my behaviour even though I don't know about them.

2Stuart_Armstrong
That is a key feature.

I think you proved that values can't exist outside a human mind, and that is a big problem for the idea of value alignment.

The only solution I see is: don't try to extract values from the human mind, but try to upload a human mind into a computer. In that case, we kill two birds with one stone: we get some form of AI which has human values (whatever they are), and it also has common sense.

An upload as an AI safety solution may also have difficulty with foom-style self-improvement, as its internal structure is messy and incomprehensible to a normal human mind... (read more)

0Stuart_Armstrong
We can and do make judgements about rationality and values. Therefore I don't see why AIs need fail at it. I'm starting to get a vague idea how to proceed... Let me work on it for a few more days/weeks, then I'll post it.

I expected it would jump out and start to replicate all over the world.

You could start a local chapter of the Transhumanist Party, or of anything you want, and just gather people to discuss futuristic topics: life extension, AI safety, whatever. Officially registering such an activity is probably a waste of time and money, unless you know what you are going to do with it, like collecting donations or renting an office.

There is no need to start an institute if you don't have a dedicated group of people around. An institute consisting of one person is something strange.

1fowlertm
That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

I read in a Russian blog that someone calculated the shape of objects able to produce such dips. They turned out to be strips 10 million kilometres long orbiting the star. I think they are very similar to very large comet tails.

Any attempts at posthumous digital immortality? That is, collecting all the data about a person in the hope that a future AI will create an exact model of him.

5rememberingGrognor
http://grognor.stacky.net/index.php?title=Main_Page Grognor did a good job collecting his own data. I don't have access to his alt twitter account, as it is a private account. But maybe someone else who does can help if the demand arises.

Two of my comments got -3 each, so probably only one person with high karma was able to do that.

Thanks for the explanation. Typically I got around 70 percent upvotes on LW1, and getting -3 was a signal that I am in a much more aggressive environment than LW1 was.

Anyway, the best downvoting system is on the Longecity forum, where many types of downvotes exist, like "non-informative", "biased", and "bad grammar" - but all of them are signed, that is, non-anonymous. If you know who downvoted you and why, you will know how to improve your next post. If you are downvoted without explanation, it feels like a strike in the dark.

I re-registered as avturchin because, after my password for turchin was reset, it was not clear what I should do next. However, after I re-registered as avturchin, I was not able to return to my original username - probably because LW2 prevents one person from having several accounts. I would prefer to reconnect to my original name, but I don't know how to do it and don't have much time to search for the correct way.

0gjm
I suspect the answer, if you want to do it, is to contact an admin. I think the LW2 admins are generally helpful, and it's much easier for them to change things than it is for the old-LW admins.

Agreed. The real point of a simulation is to use fewer computational resources to get approximately the same result as reality, depending on the goal of the simulation. So it may simulate only the surfaces of things, as in computer games.

I posted 3 comments there and got 6 downvotes, which resulted in extremely negative emotions for the whole evening that day. While I understand why they were downvoted, my emotional reaction is still a surprise to me.

Because of this, I am not interested in participating in the new site, but I like the current LW, where downvoting is turned off.

0gjm
It may be worth noting that "6 downvotes" need not mean that 6 people downvoted you. LW2 has "weighted voting" which means that the number of points your upvotes/downvotes change the victim's karma by depends on your own karma level. So maybe you were downvoted twice by weight-3 users, or three times by weight-2 users; in any case, losing 6 points probably corresponds to <6 downvotes.
0gjm
There is a user on LW2 with your username but no recent comments that I can see. Did you do this with a different username? (Feel free not to answer if you would rather keep those identities separate. I'm just curious.)

In fact, I will probably do a reality check to see whether I am in a dream if I see something like "all mountains start to move". I am referring here to techniques for reaching lucid dreams that I know and often practice. Humans are unique in that they are able to have the completely immersive illusions of dreaming, yet still recognise them as dreams without waking up.

But I get your point: the definition of reality depends on the type of reality one is living in.

If I see a mountain start to move, there will be a conflict between what I think mountains are - geological formations - and my observations, and I will have to update my world model. One way to do so is to conclude that it is not a real geological mountain, but something which pretended to be (or was mistakenly observed as) a real mountain; after it starts to move, it becomes clear that it was just an illusion. Maybe it was a large tree, or a video projection on a wall.

0entirelyuseless
Sure. But then you will be relating the pretend mountain, to other mountains, which are still real ones. If all mountains start to move, you will not be able to do that. You will have to say, "Real mountains could not move before, but now they can."

I think there is one observable property of illusions which becomes possible exactly because they are comparatively cheap: miracles. We constantly see flying mountains in movies, in dreams, and in pictures, but not in reality. In a lucid dream, I can recognise the difference between my idea of what a mountain is (a product of long-term geological history) and the fact that it has one peak and, a second later, two peaks. This can raise doubts about its consistency and often helps in reaching lucidity in the dream.

So it is possible to learn that something is an illusion before encountering the real thing, if there are some unexpected (and computationally cheap) glitches.

0entirelyuseless
"Miracles" doesn't have a sufficiently well defined meaning for this purpose. I think you mean that real things tend to have more stability and permanence, and illusions tend to have less. And I agree: real mountains tend to stay the same, while illusory mountains like ones you are dreaming tend to change rapidly. But this is relative, as I was saying before. There are real mountains, but there also real clouds, and real gusts of wind, even though clouds are less stable and permanent than mountains, and gusts of wind are less stable and permanent than clouds. So if you lived all your life in a dream, the mountains you dreamed would be real. But as I said before, they would be "mountains" with a different meaning; as real things, they would be more like clouds in the real world. Notice that if mountains in the real world suddenly multiplied or changed in a "miraculous" way, I would never conclude that the mountains were not real; I might conclude that there are other principles at work that I did not know about. Including that real mountains might have a relationship to something else that is similar to the relationship of an illusion to something real; but not that the mountains were not real.

So, are night dreams illusions or real objects? I think they are illusions: when I see a mountain in my dream, it is an illusion, and my "wet neural net" generates only an image of its surface. However, in the dream, I think it is real. So dreams are a form of immersive simulation. And since they are computationally cheaper, I see strange things like tsunamis more often in dreams than in reality.

0entirelyuseless
I agree. But "they are illusions" only makes sense because they are illusions relative to the ones we see during the day, which are not illusions. In other words, as I said, fake or illusion is relative to real, so it only has meaning when you know about a real one. In other words, if you lived all your life in a night dream and were never awake, the mountains in your dreams would not be illusions. They would be real. That does not mean they would be day mountains -- they would be something different. But when the dreaming you said "this is a mountain," the word "mountain" would refer to a dreamt mountain, not to a day one, since you would have never seen a day one and could not talk about them. So the dreaming you would say, "this is a real mountain," and that would be true. But other awake people would say, "he sees an illusion," and this would also be true. But that is because you and the awake people would be using "mountain" for different things. This is like what I said before about BBs.

Happy Petrov Day! 34 years ago a nuclear war was prevented by a single hero. He died this year. But many people now strive to prevent global catastrophic risks, and they will remember him forever.

It looks like the word "fake" is not quite right here. Let's say "illusion". If one creates a movie about a volcanic eruption, one has to model only the ways it will appear to the expected observer. This is often done in cinema, where pure CGI is used to make a clip because it is cheaper than actually filming the real event.

Illusions are in most cases computationally cheaper than real processes, and even than detailed models. Even when they film a real actress because it is cheaper than animation, copying her image creates many illusory observations of a human, when in fact it is only a TV screen.

Personally, I have lost track of the point you would like to prove. What is the main disagreement?

0entirelyuseless
"What is the main disagreement?" Whether the stuff that generates our experience can reasonably be described in terms that contrast it with real stuff. Illusion has the same problem as "fake." The word is relative: it means something like a real thing, which isn't actually a real thing. But basically real just means the normal stuff, and illusions and fake things mean things which are externally similar. But "the normal stuff" just refers to whatever is normal for us. So all of the stuff that seems normal to us, is real, and is not fake or illusory.

I meant that in a simulation most of the effort goes into calculating only the visible surfaces of things. Internal details which do not affect the visible surface may be ignored, so the computation will be much cheaper than an atom-precise simulation. For example, all of the Earth's internal structure deeper than 100 km (and probably much less) may be ignored while still getting a very realistic simulation of observing a volcanic eruption.

0entirelyuseless
We decide how much structure is needed to count as real by looking at how much structure is actually there. If volcanic eruptions have only 10 miles of structure, then only 10 miles of structure is needed for an eruption to be real. This is perfectly obvious. How much structure is needed for a chair to count as a real chair? You decide that by looking at chairs and figuring out how much structure they actually have. You do not have some a priori idea of how much structure a chair needs, so that you can say that a chair is fake if it doesn't have that structure. You first check how much structure normal chairs have; then if other things look like chairs but don't have that structure, you can say they are fake. In the same way, if normal eruptions have 10 miles of structure, but you find one that has not even 1 mile (e.g. a video), you can say it is fake. But you cannot say the one with 10 miles is fake because it doesn't have 100 miles, when you have never even seen one with 100 miles.

In that case, I use the same logic as Bostrom: each real civilization creates zillions of copies of certain experiences. This has already happened in the form of dreams, movies, and pictures.

Thus I normalize by the number of existing civilizations and avoid obscure questions about the nature of the universe or the price of the Big Bang. I just assume that inside a civilization rare experiences are often faked. They are rare because they are in some way expensive to create, like diamonds or observations of volcanoes, but their copies are cheap, like glass or pictures.

We could explain it in terms of observations. A fake observation is a situation where you experience something that does not actually exist. For example, you watch a video of a volcanic eruption on YouTube. It is computationally cheaper to copy a video of a volcanic eruption than to actually create a volcano - and because of this, we see pictures of volcanic eruptions more often than actual ones.

It is not meaningless to say that the world is fake if only the observable surfaces of things are calculated, as in a computer game, which is computationally cheaper.

0entirelyuseless
There can be a fake video of a volcanic eruption, because the video is a picture without the normal physical mechanism that causes such images. In other words, it only has the observable surface without the regular interior. But it is not meaningful to say, "The whole world we know is fake." Because for that to be true, the world has to be missing a regular interior. But the regular interior, say, of a volcanic eruption is the interior that volcanic eruptions normally have in fact, whatever that is; so by definition the interior is there. In other words you need to experience the version you call real in order to call another version fake. It might be that there is more stuff that you do not know about, but calling the world fake is not a good way to say this. Instead, you should just say that there is more stuff in reality than you know about. There is no need to call the stuff you do know fake.

Maybe it is more correct to speak of the price of an observation. It is cheaper to see a volcanic eruption on YouTube than in reality.

0Yosarian2
I guess, but it's cheaper to observe the sky in reality than it is on YouTube. To observe the sky, you just have to look out the window; turning on your computer costs energy and such. So in order for this to be coherent, I think you have to somehow make the case that our reality is to some extent rare or unlikely or expensive, and I'm not sure how you can do that without knowing more about the creation of the universe than we do, or how "common" the creation of universes is over... some scale (not even sure what scale you would use; over infinite periods of time? Over a multiverse? Does the question even make sense?)

I probably said this before as well, but the SA is in fact a comparison of prices. It basically says that cheaper things occur more often, and that fakes are cheaper than real things. That is why we see images of a nuclear blast more often than a real one.

And yes, there are many short simulations in our world, like dreams, thoughts, clips, pictures.

0Yosarian2
It seems weird to place a "price" on something like the Big Bang and the universe. For all we know, in some state of chaos or quantum uncertainty, the odds of something like a Big Bang happening eventually approaches 100%, which makes it basically "free" by some definition of the term. Especially if something like the Big Bang and the universe happens an infinite number of times, either sequentially or simultaneously. Again, we don't know that that's true, but we don't know it's not true either.
1entirelyuseless
The thing is that this requires you to define what "fake" and "real" are. In practice those are relative terms that refer to something cheaper and something more expensive in your world. So saying "maybe I'm a Boltzmann brain" or "maybe I'm in a simulation" has the problem that you are trying to compare the world you know to a potentially more expensive world and saying "maybe my world is cheaper than it seems." But since you haven't experienced a more expensive version than the real world, you don't even know what that would mean. Of course it is always possible, and even likely, that something is cheaper than it appears (even the real world), but it seems silly to describe that by saying "the real world is a fake world." The words "the real world" refer to the only world you know, even if it is quite likely that that world is cheaper than it seems. In other words, it is likely that the world is cheap; it is meaningless to say the world is fake.

Sounds convincing. I will think about it.

Did you see my map of the simulation argument by the way? http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

0Yosarian2
Yeah, I saw that. In fact looking back on that comment thread, it looks like we had almost the exact same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible but that I didn't see the anthropic argument for it as convincing for several reasons.

I agree that in a simulation one could have fake memories of the simulation's past. But I don't see a practical reason to run simulations a few minutes long (except of some very important event) - a Fermi-solving simulation must run from the beginning of the 20th century until the civilization ends. Game simulations will also probably be life-long. Even resurrection simulations should be lifelong. So I think the typical simulation length is around one human life. (One exception I could imagine is intense respawning around some problematic moment. In that... (read more)

3Yosarian2
The main explanation that I've seen for why an advanced AI might run a lot of simulations is in order to better predict how humans would react in different situations (perhaps to learn to better manipulate humans, or to understand human value systems, or maybe to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc.). If so, then it likely would run a very large number of very short simulations, designed to put uploaded minds in very specific and very clearly designed unusual situations, and then end the simulation shortly afterwards. Likely if that was the goal it would run a very large number of iterations on the same scenario, each time varying the details ever so slightly, in order to try to find out exactly what makes us tick. For example, instead of philosophizing about the trolley car problem, it might just put a million different humans into that situation and see how each one of them reacts, and then iterate the situation ten thousand times with slight variations each time to see which variables change how humans will react. If an AI does both (both short small-scale simulations and long universe-length simulations), then the number of short simulations would massively outnumber the number of long simulations; you could run quadrillions of them for the same resources as it takes to actually simulate an entire universe.

I am a member of the class of beings able to think about the Doomsday argument, and it is the only correct reference class. And for this class, my day is very typical: I live in an advanced civilization interested in such things, and I started discussing the problem of the DA in the morning.

I can't say that I am randomly chosen from hunter-gatherers, as they were not able to think about the DA. However, I could observe some independent events (if they are independent of my existence) at a random moment of their existence and thus predict their duration. It will not help ... (read more)

0Yosarian2
If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations. Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation's been running, which you can't know. You can accept either SA or DA, but you can't use both of them at the same time.

It is not a bug, it is a feature :) Quantum mechanics is also very counterintuitive, creates strange paradoxes, etc., but that doesn't make it false.

I think that the DA and the simulation argument are both true, as they support each other. Adding Boltzmann brains is more complicated, but I don't see a problem with being a BB, as there is a way to create a coherent world picture using only BBs and a path in the space of possible minds; but I will not elaborate here, as I can't do it briefly. :)

As I said above, there is no need to tweak the reference classes to which I belong, as t... (read more)

1Yosarian2
Sure, and if we had anything like the amount of evidence for anthropic probability theories that we do for quantum theory, I'd be glad to go along with it. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results. I don't see that at all. Why not classify yourself as "part of an intelligent species that has nuclear weapons or otherwise poses an existential threat to itself"? That seems like just as reasonable a classification as any (especially if we're talking about "doomsday"), but it gives a very different (worse) result. Or, I dunno, "part of an intelligent species that has built an AI capable of winning at Go?" Then we only have a couple more months. ;) It also seems weird to just assume that somehow today is a normal day in human existence, no more or less special than any day any random hunter-gatherer wandered the plains. If you have some a priori reason to think that the present is unusual, you should probably look at that instead of vague anthropic arguments; if you just found out you have cancer and your house is on fire while someone is shooting at you, it probably doesn't make sense to just ignore all that and assume that you're halfway through your lifespan. Or if you were just born 5 minutes ago, and seem to be in a completely different state than anything you've ever experienced. And we're at a very unique point here in the history of our species, right on the verge of various existential threats and at the same time right on the verge of developing spaceflight and the kind of AI technology that would likely ensure our descendants may persist for billions of years. Isn't it more useful to look at that instead of just assuming that today is just another day in humanity's life like any other? I mean, it seems likely that we're already waaaaaay out on the probability curve here in one way or another, if the Great Silence of the universe is any guide. There can't have been many intelligent

I don't see the problems with the reference class, as I use the following conjecture: "each reference class has its own end", together with the idea of a "natural reference class" (similar to "the same computational process" in TDT): "I am randomly selected from all who think about the Doomsday argument". The natural reference class gives the saddest predictions, as the number of people who know about the DA has been growing since 1983, and it implies the end soon, maybe in a couple of decades.

The predictive power here is probabilistic and not much dif... (read more)

However, if we look at the Doomsday argument and the simulation argument together, they support each other: most observers will exist in past-simulations of something like 20th-21st century tech civilizations.

It also implies some form of simulation termination soon, or - and this is our chance - the unification of all observers into just one observer, that is, the unification of all minds into one superintelligent mind.

But the question of why I am not a superintelligence, if most minds in the universe are superintelligences, still exists :(

I can't easily find the flaw in your logic, but I don't agree with your conclusion because the randomness of my properties could be used for predictions.

For example, I could predict the median human life expectancy based on my (supposedly random) current age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and this is true).

I could give many examples where the randomness of my properties can be used to make predictions, even to measure the size of the Earth based on my random distance from the equator. And in all the cases that I could check, the DA-style logic works.
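A minimal sketch of this estimator (my illustration, with an assumed lifetime distribution, not something from the original comment): if a total duration T is observed at a uniformly random moment t, then the DA-style guess "total ≈ 2 × current age" exceeds the true T in about half of all trials, whatever the distribution of T.

```python
import random

# Monte Carlo check of the "random observer moment" estimator.
# Assumption (for illustration only): the true duration T is drawn from an
# arbitrary distribution, and we observe it at a uniformly random moment t.
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    T = random.expovariate(1.0)   # true total duration (arbitrary units)
    t = random.uniform(0.0, T)    # the random moment at which we observe
    if 2 * t >= T:                # DA-style median estimate: total = 2 * age
        hits += 1

print(f"2*t >= T in {hits / trials:.1%} of trials")  # prints roughly 50%
```

The 50% figure is exact by construction: 2t >= T precisely when t falls in the second half of the duration, which a uniformly random observer moment does half the time.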

I think the opposite: the Doomsday argument (in one of its forms) is an effective predictor in many common situations, and thus it can also be applied to the duration of human civilization. The DA is not absurd: our expectations about the human future are absurd.

For example, I could predict the median human life expectancy based on my supposedly random age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and this is true).

3Yosarian2
Let me give a concrete example. If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then it also invalidates the same argument, because we probably aren't living in the real universe at all, we're probably living in a simulation. Except you're probably not living in a simulation because we're probably living in a short period of time of quantum randomness that appears long after the universe ends which recreates you for a fraction of a second through random chance and then takes you apart again. There should be a vast number of those events that happen for every real universe and even a vast number of those events for every simulated universe, so you probably are in one of those quantum events right now and only think that you existed when you started reading this sentence. And that's only a small part of the kind of weirdness these arguments create. You can even get opposite conclusions from one of these arguments just by tweaking exactly what reference class you put things in. For example, "I should be roughly the average human" gives you an entirely different doomsday answer than "I should be roughly the average life form", which gives you an entirely different answer than "I should be roughly the average life form that has some kind of thought process". And there's no clear way to pick a category; some intuitively feel more convincing than others, but there's no real way to determine that. Basically, I would take the doomsday argument (and the simulation argument, for that matter) a lot more seriously if anthropic probability arguments of that type didn't lead to a lot of other conclusions that seem much less plausible, or in some cases seem to be just incoherent. Plus, we don't have a good way to deal with what's known as "the measurement problem" if we are trying to use anthropic probability in an infinite multiverse, which throws a further wrench into the gears. A theory which fits most of what we know bu
1Xianda_GAO_duplicate0.5321505782395719
The doomsday argument is controversial not because its conclusion is bleak but because it has some pretty hard-to-explain implications. For example, the choice of reference class is arbitrary but affects the conclusion, and it also gives some unreasonable predictive power and backward causation. Anyone trying to understand it will eventually have to reject the argument or find some way to reconcile it with these implications. To me, neither position is biased as long as it is sufficiently argued.
0entirelyuseless
Exactly. My current age is almost exactly halfway through a normal human lifetime, not a millionth of the way through or 99.9% of the way through.