
Comment author: turchin 22 September 2017 09:35:20PM 0 points [-]

Sounds convincing. I will think about it.

Did you see my map of the simulation argument by the way? http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

Comment author: Yosarian2 22 September 2017 09:49:59PM *  0 points [-]

Yeah, I saw that. In fact, looking back on that comment thread, it looks like we had almost exactly the same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible, but that I didn't find the anthropic argument for it convincing, for several reasons.

Comment author: turchin 22 September 2017 08:53:09PM *  0 points [-]

I agree that in a simulation one could have fake memories of the simulation's past. But I don't see a practical reason to run few-minute simulations (unless they are of a very important event): a Fermi-solving simulation must run from the beginning of the 20th century until the civilization ends. Game-simulations will also probably be life-long. Even resurrection-simulations should be lifelong. So I think the typical simulation length is around one human life. (One exception I could imagine is intense respawning around some problematic moment. In that case, there would be many respawnings around a possible death event, but the consequences of this idea are worrisome.)

If we apply DA to the simulation, we should probably count false memories as real memories, because the length of false memories is also random, and there is no actual difference between precalculating false memories and actually running a simulation. However, the termination of the simulation is real.

Comment author: Yosarian2 22 September 2017 09:19:55PM *  1 point [-]

But I don't see a practical reason to run few-minute simulations

The main explanation I've seen for why an advanced AI might run a lot of simulations is to better predict how humans would react in different situations (perhaps to learn to manipulate humans better, to understand the human value system, or to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc.). If so, it would likely run a very large number of very short simulations, designed to put uploaded minds in very specific, carefully constructed, unusual situations, and then end each simulation shortly afterwards. If that were the goal, it would likely run a very large number of iterations of the same scenario, varying the details slightly each time, in order to find out exactly what makes us tick. For example, instead of philosophizing about the trolley problem, it might just put a million different humans into that situation and see how each of them reacts, then iterate the situation ten thousand times with slight variations to see which variables change how humans react.

If an AI does both (short small-scale simulations and long universe-length simulations), then the short simulations would massively outnumber the long ones: you could run quadrillions of them for the same resources it takes to simulate an entire universe.
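As a very rough back-of-the-envelope check (my own illustrative numbers, assuming cost scales with simulated person-time):

    10^10 people × 100 years ≈ 5×10^17 simulated person-minutes for one full-length run
    5×10^17 ÷ 5 person-minutes per short single-subject scenario ≈ 10^17 short simulations

So one universe-length run could indeed pay for on the order of a hundred quadrillion five-minute scenarios.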

Comment author: turchin 19 September 2017 09:29:31AM 1 point [-]

I am a member of the class of beings able to think about the Doomsday argument, and it is the only correct reference class. And for this class, my day is very typical: I live in an advanced civilization interested in such things, and I started discussing the problem of DA in the morning.

I can't say that I am randomly chosen from hunter-gatherers, as they were not able to think about DA. However, I could observe some independent events (if they are independent of my existence) at a random moment of their existence and thus predict their duration. This will not help to predict the duration of the hunter-gatherers' existence, as that is not truly independent of my existence, but it could help in other cases.

Twenty minutes ago I took part in a shooting in my house - but it was just a dream, and that supports the simulation argument, which basically claims that most events I observe are unreal, as simulating them is cheaper. During my life I have taken part in hundreds of shootings in dreams, games, and movies, but never in a real one: simulated events are much more frequent.

Thus DA and SA are not too bizarre in themselves; they become bizarre because of an incorrect solution to the reference class problem.

The strangeness of DA appears when we compare it with unrealistic expectations about our future: that there will be billions of years with billions of people living in a human-like civilization. More probable is that in several decades AI will appear, which will run many simulations of the past (and probably kill most humans). This is exactly what we could expect from observed technological progress, and DA and SA just confirm observed trends.

Comment author: Yosarian2 22 September 2017 01:38:51PM 0 points [-]

If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations.

Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation's been running, which you can't know. You can accept either SA or DA, but you can't use both of them at the same time.

Comment author: Dagon 19 September 2017 09:19:13PM 0 points [-]

I fail the ideological Turing test for whoever authored this (presuming it's not intended as a joke), and I'm far enough from being able to model it logically that I'm OK being in the "that's silly" camp.

It's not just that I don't agree; I can't even figure out what the author wants me to do differently tomorrow than I did yesterday, and when I do guess at some phrases like "humans before business" and "We must question our intent and listen to our hearts", I have trouble believing anyone sane actually wants that.

The specific silliness of "humans before business" is pretty straightforward: business is something humans do, and "humans before this thing that humans do" is meaningless or tautological. Business doesn't exist without humans, right?

Comment author: Yosarian2 20 September 2017 08:54:47AM 2 points [-]

The specific silliness of "humans before business" is pretty straightforward: business is something humans do, and "humans before this thing that humans do" is meaningless or tautological. Business doesn't exist without humans, right?

Eh, it's not as absurd as that. You know how we worry that AIs might optimize for something easily quantifiable, but in a way that destroys human value? I think it's entirely reasonable to think that businesses may do the same thing, and optimize for their own profit in a way that destroys human value in general. For example, Facebook is to a significant extent designed to maximize clicks and eyeballs in manipulative ways that do not actually serve the users' communication needs.

Comment author: Viliam 19 September 2017 11:17:14PM 0 points [-]

the sheer quantity of new content is extremely low

That depends on how much time you actually want to spend reading LW. I mean, the optimal quantity will be different for a person who reads LW two hours a day and a person who reads LW two hours a week. Now the question is: which of these should we optimize LW for? The former seems more loyal, but the latter is probably more instrumentally rational, if we agree that people should be doing things besides reading the web. (Also, these days LW competes for time with SSC and others.)

Comment author: Yosarian2 19 September 2017 11:28:55PM 1 point [-]

Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, and then promote or highlight the best 5%-10% of it so that someone who has only two hours a week can see it.

Everyone is much better off that way. The person with only two hours a week gets much better content than if there were much less content to begin with.

Comment author: turchin 18 September 2017 09:28:49PM *  0 points [-]

It is not a bug, it is a feature :) Quantum mechanics is also very counterintuitive and creates strange paradoxes, etc., but that doesn't make it false.

I think that DA and the simulation argument are both true, as they support each other. Adding Boltzmann brains is more complicated, but I don't see a problem with being a BB, as there is a way to create a coherent world picture using only BBs and a path in the space of possible minds; I won't elaborate here, as I can't do it briefly. :)

As I said above, there is no need to tweak the reference classes to which I belong, as there is only one natural class. However, if we take different classes, we get predictions for different events: for example, the class of humans will go extinct soon, but the class of animals could exist for billions more years, and that is quite a possible outcome: humans go extinct, but animals survive. There is nothing mysterious about reference classes, just different answers to different questions.

Measure is the real problem, I think.

The theory of DA is testable if we apply it to many smaller examples, as Gott successfully did when predicting the run lengths of Broadway shows.
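(A minimal sketch of Gott's method in Python, assuming the usual uniform-fraction setup; the function name and the 100-day example are mine, for illustration only:)

    # Gott's delta-t argument: if the current age t is a uniformly random
    # fraction r of the total duration T (t = r*T, r ~ Uniform(0,1)),
    # then T = t/r, and a central confidence interval for T follows.
    def gott_interval(age, confidence=0.95):
        tail = (1.0 - confidence) / 2.0        # e.g. r in (0.025, 0.975) at 95%
        return age / (1.0 - tail), age / tail  # bounds on the total duration T

    # A show that has already run 100 days: with 95% confidence its total run
    # is between ~102.6 and 4000 days, i.e. its future run is between t/39 and 39t.
    print(gott_interval(100))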

So the theory is testable, no weirder than other theories we use, and there is no contradiction between the Doomsday argument and the simulation argument (they both imply that there are many simulations of the past which will be turned off soon). However, it could still be false, or have one more turn that makes things even weirder, e.g. if we try to account for mathematically possible observers, multilevel simulations, or Boltzmann AIs.

Comment author: Yosarian2 19 September 2017 02:00:04AM *  1 point [-]

Quantum mechanics is also very counterintuitive and creates strange paradoxes, etc., but that doesn't make it false.

Sure, and if we had anything like the amount of evidence for anthropic probability theories that we have for quantum theory, I'd be glad to go along with them. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results.

As I said above, there is no need to tweak the reference classes to which I belong, as there is only one natural class.

I don't see that at all. Why not classify yourself as "part of an intelligent species that has nuclear weapons or otherwise poses an existential threat to itself"? That seems like just as reasonable a classification as any (especially if we're talking about "doomsday"), but it gives a very different (worse) result. Or, I dunno, "part of an intelligent species that has built an AI capable of winning at Go"? Then we only have a couple more months. ;)

It also seems weird to just assume that today is a normal day in human existence, no more or less special than any day some random hunter-gatherer wandered the plains. If you have an a priori reason to think the present is unusual, you should probably look at that instead of at vague anthropic arguments; if you just found out you have cancer and your house is on fire while someone is shooting at you, it probably doesn't make sense to ignore all that and assume you're halfway through your lifespan. Likewise if you were just born 5 minutes ago and seem to be in a completely different state than anything you've ever experienced. And we're at a very unusual point in the history of our species, right on the verge of various existential threats and at the same time right on the verge of developing spaceflight and the kind of AI technology that could let our descendants persist for billions of years. Isn't it more useful to look at that, instead of assuming that today is just another day in humanity's life like any other?

I mean, it seems likely that we're already waaaaaay out on the probability curve here in one way or another, if the Great Silence of the universe is any guide. There can't have been many intelligent species who got to where we are in the history of our galaxy, or I think the galaxy would look very different.

Comment author: turchin 18 September 2017 12:42:59PM *  1 point [-]

I think the opposite: the Doomsday argument (in one of its forms) is an effective predictor in many common situations, and thus it could also be applied to the duration of human civilization. DA is not absurd: our expectations about the human future are absurd.

For example, I could predict the median human life expectancy based on my supposedly random age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and this is true).
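(Spelling out the arithmetic behind that 50 percent figure, in the standard Gott form:)

    If my age t is a uniformly random fraction r of the total duration T,
    then T = t/r with r ~ Uniform(0,1), so P(T < 2t) = P(r > 1/2) = 1/2:
    twice the current age is the median estimate of the total duration.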

Comment author: Yosarian2 18 September 2017 08:57:32PM 2 points [-]

Let me give a concrete example.

If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then it also invalidates that same argument, because we probably aren't living in the real universe at all; we're probably living in a simulation. Except you're probably not living in a simulation either, because you're probably living in a brief stretch of quantum randomness, long after the universe ends, which recreates you for a fraction of a second by random chance and then takes you apart again. There should be a vast number of those events for every real universe, and even a vast number for every simulated universe, so you are probably in one of those quantum events right now and only think that you existed when you started reading this sentence.

And that's only a small part of the weirdness these arguments create. You can even get opposite conclusions from one of these arguments just by tweaking exactly which reference class you put things in. For example, "I should be roughly the average human" gives you an entirely different doomsday answer than "I should be roughly the average life form", which gives you an entirely different answer than "I should be roughly the average life form that has some kind of thought process". And there's no clear way to pick a category; some intuitively feel more convincing than others, but there's no real way to determine that.

Basically, I would take the doomsday argument (and the simulation argument, for that matter) a lot more seriously if anthropic probability arguments of this type didn't lead to a lot of other conclusions that seem much less plausible, or in some cases seem simply incoherent. Plus, we don't have a good way to deal with what's known as "the measure problem" if we try to use anthropic probability in an infinite multiverse, which throws a further wrench into the gears.

A theory which fits most of what we know but gives one or a few weird results that we can test is interesting. A theory that gives a whole mess of weird and often conflicting results, many of which would make the scientific method itself a meaningless joke if true, and almost none of which are testable, is probably flawed somewhere, even if it's not clear to us quite where.

Comment author: entirelyuseless 17 September 2017 07:23:50PM 0 points [-]

"I think that most discussions about Doomsday argument are biased in the way that author tries to disprove it."

This article is a good example: talking about "solutions" to an argument implies that you started out from the beginning with the desire to prove it was false, without first considering whether it was likely to be true or not.

Comment author: Yosarian2 17 September 2017 08:14:24PM 0 points [-]

I think the argument probably is false, because arguments of the same type can be used to "prove" a lot of other things that also clearly seem to be false. When you take that kind of anthropic reasoning to its natural conclusion, you reach a lot of really bizarre places that don't seem to make sense.

In math, it's common for a proof to be disputed by demonstrating that the same form of proof can be used to show something that seems to be clearly false, even if you can't find the exact step where the proof went wrong, and I think the same is true about the doomsday argument.

Comment author: Habryka 16 September 2017 11:56:18PM 3 points [-]

Agree with this.

I do, however, think that we already have a really large stream of high-quality content in the broader rationality diaspora that we just need to tap into and get onto the new page. As such, the problem is a bit easier than recruiting a ton of new content creators; it is more a problem of building something that the current content creators want to move towards.

And as soon as we have a high-quality stream of new content I think it will be easier to attract new writers who will be looking to expand their audience.

Comment author: Yosarian2 17 September 2017 01:48:05AM *  3 points [-]

Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the beta is rolled out, but if activity doesn't increase quickly, they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community here, there's no point in coming back instead of posting on your own blog.

I guess my concern here though is that right now, LessWrong has a "discussion" side which is a little active and a "main" side which is totally dead. And it sounds like this plan would basically get rid of the discussion side, and make it harder to post on the main side. Won't the most likely outcome just be to lower the amount of content and the activity level even more, maybe to zero?

Fundamentally, I think the premise of your second bottleneck is incorrect. We don't really have a problem with the signal-to-noise ratio here: most of the posts that do get posted are pretty good, and the few that aren't don't get upvoted, so most people ignore them without a problem. We have a problem with low total activity, which is almost the exact opposite problem.

Comment author: Yosarian2 16 September 2017 09:00:26PM *  3 points [-]

My concern about the writing portion of your idea is this: from my point of view, the biggest problem with LessWrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse, especially anything that creates a barrier for new people who want to post something in Discussion. People will not want to write content that nobody might see unless it happens to get promoted.

Once you have a steady stream of content on a daily basis, then maybe you can find a way to curate it and highlight the best of it. But you need that stream of content and engagement first and foremost, or I worry the whole thing may be stillborn.
