Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism¹ seriously. Still others take Tegmark cosmology (and related big-universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?

I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe², nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.

It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber (or Tegmark's Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.

Due to these considerations, it is unclear if we should go ahead doing the equivalent of philosoraptorizing amidst these poorly posed questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul³ matters very much. Does it?

Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.

Added: The answer to the question raised by the post is "Yes, theism is wrong; we don't have a good word for the thing that looks a lot like theism but carries fewer unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.

¹ Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.

² I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.

³ Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).


"Gods are ontologically distinct from creatures, or they're not worth the paper they're written on." -- Damien Broderick

If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!

There's also no hint of worship, which everyone else on the planet thinks is a key part of the definition of a religion; if you believe that Cthulhu exists but not Jehovah, and you hate and fear Cthulhu and don't engage in any Elder Rituals, you may be superstitious but you're not yet religious.

This is mere distortion of both the common informal use and advanced formal definitions of the word "atheism", which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.

Also http://www.smbc-comics.com/index.php?db=comics&id=1817

A Simulator would be ontologically distinct from creatures like us -- for any definition of ontologically distinct I can imagine wanting to use. The Simulation Hypothesis is a metaphysical hypothesis in the most literal sense -- it's a hypothesis about what our physical universe really is, beyond the wave function.

Yeah, Will's theism in this post isn't the theism of believers, priests or academic theologians. With certain audiences confusion would likely result, so this language should be avoided with those audiences. But I think we're somewhat more sophisticated than that -- and if there are reasons to use theistic vocabulary then I don't see why we shouldn't. I'm assuming Will has these reasons, of course.

Keep in mind, the divine hasn't always been supernatural. Greek gods were part of natural explanations of phenomena, Aristotle's god was just there to provide a causal stopping place, Hobbes's god was physical, etc. We don't have to kowtow to the usage of present religious authorities. God has always been a flexible word; there is no particular reason to take modern science to be falsifying God instead of telling us what a god, if one exists, must be like.

I feel like we lose o...

jacob_cannell (score 0, 13y)
I wish this viewpoint were more common, but judging from the OP's score, it is still in the minority. I just picked up Sam Harris's latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion's turf and claimed objective morality as a subject of scientific inquiry. Perhaps the time has also come when science reclaims theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It's time we discussed this rationally.
Dreaded_Anomaly (score 2, 13y)
Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject. My views on "reclaiming" theism are summed up by ata's previous comment: [...]
Furcas (score 1, 13y)
Have you read Less Wrong's metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many. Sean Carroll, on the other hand, gets absolutely everything wrong.
Dreaded_Anomaly (score 5, 13y)
Given that the full title of the book is "The Moral Landscape: How Science Can Determine Human Values," I think that conclusion is the major one, and certainly the controversial one. "Science can help us judge things that involve facts" and similar ideas aren't really news to anyone who understands science. Values aren't a certain kind of fact. I don't see where Sean's conclusions are functionally different from those in the metaethics sequence. They're presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean's [...] and this one of Eliezer's [...] seem to express the same sentiment, to me. If you really object to Sean's writing, take a look at Russell Blackford's review of the book. (He is a philosopher, and a transhumanist one at that.)
Furcas (score 2, 13y)
To be accurate, Harris should have inserted the word "Instrumental" before "Values" in his book's title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two thirds of the book left, and there would still be a huge number of people who would find it controversial, and I'm not just talking about religious fundamentalists. The difference is huge. Eliezer and I do believe that our 'convictions' have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).
Dreaded_Anomaly (score 0, 13y)
I wouldn't limit "people who don't understand science" to "religious fundamentalists," so I don't think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn't give much credence to that "controversy" in a serious discussion. The quantum numbers which an electron possesses are the same whether you're a human or a Pebblesorter. There's an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way. I understand what Eliezer means when he says [...], but he later says [...]. That's what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher level, less fundamental in terms of nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what's morally right based on terminal values, but we can't find terminal values that are objectively right in that they exist whether or not we do.
wnoise (score 3, 13y)
Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning a distance 3 meters wide and 4 long, while a pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
Dreaded_Anomaly (score 0, 13y)
Yes, I should have been more careful with my language. Thanks for pointing it out. Edited.
Furcas (score 2, 13y)
Okay, let me make my claim stronger then: A huge number of people who understand science would find the truncated version of TML described above controversial: a big fraction of the people who usually call themselves moral nihilists or moral relativists. I'm saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment. Do you believe it is true that "For every natural number x, x = x"? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to "For every natural number x, x != x"? Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don't want to. Sorry.
byrnema (score 0, 13y)
In quick approximation, what was this conclusion?
Furcas (score 4, 13y)
That terminal values are like axioms, not like theorems. That is, they're the things without which you cannot actually ask the question, "Is this true?" You can say or write the words "Is", "this", and "true" without having axioms related to that question somewhere in your mind, of course, but you can't mean anything coherent by the sentence. Someone who asks, "Why terminal value A rather than terminal value B?" and expects (or gives) an answer other than "Because of terminal value A, obviously!"* is confused. *That's assuming that A really is a terminal value of the person's moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.
jacob_cannell (score 0, 13y)
I just started reading it, and picked it really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I've read into Harris's thesis about objective morality, I see it as rather hopeless, depending ultimately on the notion of a timeless universal human brain architecture which is mythical even today, posthuman future aside. Carroll's point at the end about attempting to find the 'objective truth' about what is the best flavor of ice cream echoes my thoughts so far on The Moral Landscape.

The interesting part wasn't his theory, it was the idea that the entire belief space currently held by religion is now up for grabs. In regards to ata's previous comment, I don't agree at all. Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as: Was this observable universe created by a superintelligence? Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument). Did superintelligences intervene in earth's history? How do they view us from a moral/ethical standpoint? And so on...

These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable. You can say "theism/God" were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?
Dreaded_Anomaly (score 1, 13y)
I try not to rationalize. I don't think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?
jacob_cannell (score 0, 13y)
I don't think it is any stretch of vocabulary to use the word 'god' to describe future superintelligences. If the belief is correct, it can't also be a silly mistake. The entire idea that one must choose words carefully to avoid 'vast planes of ambiguity and negative connotations' is at the heart of the 'theism as taboo' problem. The SA so far stands to show that the central belief of broad theism is basically correct. Let's not split hairs on that and just admit it. If that is true, however, then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order. Avoiding the 'negative connotations' suggests to me a flawed process of consciously or sub-consciously distancing every mental interpretation of the Singularity and the SA from any similar theistic belief. I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.
Dreaded_Anomaly (score 1, 13y)
"The universe was created by an intelligence" is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions. Also, at this point I'm more inclined to accept Tegmark's mathematical universe description than the simulation argument. That seems oxymoronic to me. There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.
jacob_cannell (score 0, 13y)
You're right, I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term "broad theism" was meant to include theism & deism. Perhaps that category already has a term, not quite sure. I find the SA has much stronger support - Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against. Some fraction of simulations probably have creators who desire some form of worship/deference; the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism. I see it as the other way around: the SA gives us a reasonable structure within which to (re)-evaluate theism.
Dreaded_Anomaly (score 0, 13y)
How could we find evidence of the universe simulating our own, if we are in a simulation? They're both logical arguments, not empirical ones. I really don't see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?
jacob_cannell (score 0, 13y)
If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans, which are nearly indistinguishable (from their perspective) from our original 2010, then much of the uncertainty in the argument is eliminated. (Some uncertainty always remains, of course.) The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.
[anonymous] (score 0, 13y)
tl;dr - If you're going to equate morality with taste, understand that when we measure either of the two, bringing agents into the process is a huge factor we can't leave out.

I'll be upfront about having not read Sam Harris' book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point: [...] I've found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query they're after. (Am I looking for "If I had to guess, what would random person z's favorite flavor of ice cream be, with no other information?" or am I looking for something else?) This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people's preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn't mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we're given, which may mean a lot of individual subjectivity. In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on the three premises for understanding happiness:

1] Using imperfect tools sucks, but it's better than no tools.
2] An honest, real-time insider view is going to be more accurate than our current best outside views.
3] Abuse the law of large numbers to get around the imperfections of 1] and 2] (a.k.a. measure often).
jacob_cannell (score 1, 13y)
I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people's preferences, it's objectively determining what people's preferences should be. There is an objective best ice cream flavor given a certain person's mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor? My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism - directing everything towards some very long term universal goal.
[anonymous] (score 1, 13y)
This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc. This I agree with, but it's more for the gut response of "I don't trust people to determine other people's values." I wonder if the latter could be handled objectively, but I'm not sure I'd trust humans to do it. My reflex response to this question was "No," followed by "Wait, wouldn't I weight human minds much more significantly than raccoons if I was figuring out human preferences?" Which I then thought through and latched onto "Agents still matter; if I'm trying to model 'best ice cream flavor to humans', I give the rough category of 'human-minds' more weight than other minds. Heck, I hardly have a reason to include such minds, and instrumentally they will likely be detrimental." So in that particular generalization, we disagree, but I'm getting the feeling we agree here more than I had guessed.
jacob_cannell (score -2, 13y)
We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children's immediate preferences. But even this freedom is not absolute.
Jack (score 1, 13y)
Hard to say; my sense is that those of us endorsing, sympathizing with, or tolerant of Will's position were pretty persuasive in this thread. The OP's score went up from where it was when I first read the post. I'm in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist and as a commentator on the status of atheism as a public force. But he is way out of his depth as a moral philosopher. Carroll's reaction is pretty much dead on. Even by the standards of the ethical realists, Harris's arguments just aren't any good. As philosophy, they'd be unlikely to meet the standards for publication. Now, once you accept certain controversial things about morality then much of what Harris says does follow. And from what I've seen Harris says some interesting things on that score. But it's hard to get excited when the thesis the book got publicized with is so flawed.

You seem to be dictating that theist beliefs and simulationist beliefs should not be collected together into the same reference class. (The reason for this diktat seems to be that you disrespect the one and are intrigued by the other - but never mind that.)

However, this does not seem to address the point which I think the OP was making. Which seems to be that arguments for (against) theism and arguments for (against) simulationism should be collected together in the same reference class. That if we do so, we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation. Yet (subjectively speaking) we don't feel they have the same force.

Contempt for those with whom you disagree is one of the most dangerous traps facing an aspiring rationalist. I think that it would be a very good idea if the OP were to produce that posting on charity-in-interpretation which he mentioned.

Next!

we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation

I've argued rather extensively against religion on this website. Name a single one of those arguments which is equally effective against simulationism.

I've argued rather extensively against religion on this website.

That was my impression as well, but when I went looking for those arguments, they were very difficult to find. Perhaps my Google-fu is weak. Help from LW readers is welcome.

I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.

Name a single one of those arguments which is equally effective against simulationism.

Well, the only really clear-cut example of a posting-length argument against religion is based on the "argument from evil". As such, it is clearly not equally effective against simulationism.

You did make a posting attempting to define the term "supernatural" in a way that struck me as a kind of special pleading tailored to exclude simulationism from the criticism that theism receives as a result of that definition.

This posting rejects the supernatural by defining it as 'a belief in an explanatory entity which is fundament...

Help from LW readers is welcome.

I'll chime in that Eliezer provided me with the single most personally powerful argument that I have against religion. (I'm not as convinced by razor and low-prior arguments, perhaps because I don't understand them.)

The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer's argument (if someone knows the post, I'll link to it; there's at least this): while you're in the process of inventing things, there's nothing preventing you from making your theory as grand as you want. Once you have your maybe-they're-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.

I don't have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc). But it's an absurd means of epistemology. I think it's amazing that religions go from 'whoever made us must love us and want us to love the world' --which is a very natural pattern for humans to m...

Perplexed (score 6, 13y)
I also would like to see a link to that post, if anyone recognizes it. I'll agree that to (atheist) me, it certainly seems that one big support for religious belief is the natural human tendency toward wishful thinking. However, it doesn't do much good to provide convincing arguments against religion as atheists picture it. You need convincing arguments against religion as its practitioners see it. Yeah, I know what you mean. Pity I can't turn that around and use it against simulationism. :)
byrnema (score 1, 13y)
I found it: this is the post I meant. But it wasn't written by Eliezer, sorry. (The comment I linked to in the grandparent resonates with this idea for me, and I might have seen more resonance in older posts.) I'm confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion? Ha ha. Simulationism is of course a way cool idea. I think the compelling meme behind it though is that we're being tricked or fooled by something playful. When you deviate from this pattern, the idea is less culturally compelling. In particular, the word 'simulation' doesn't convey much. If you just mean something that evolves according to rules, then our universe is apparently a simulation already anyway.
Perplexed (score 1, 13y)
Thx. That is a good posting, as was the posting to which it responded. Whoops! Bad assumption on my part. Sorry. No, I am not particularly interested in turning theists into atheists either, though I am interested in rational persuasion techniques more generally.
timtyler (score 3, 13y)
Dennett tells a similar "agentification" story: [...]
timtyler (score 0, 13y)
I think that is usually called Patternicity these days. See: [...]
byrnema (score 3, 13y)
Seeing patterns in noise and agency in patterns (especially fate) is probably a large factor in religious belief. But what I was referring to by pattern matching was something different. Our cultural ideas about the world make lots of patterns, and there are natural ways to complete these patterns. When you hear the completion of these patterns, it can feel very correct, like something you already knew, or especially profound if it pulls together lots of memes.

For example, the Matrix is an idea that resonates with our culture. Everyone believes it on some level, or can relate to the world being like that. The movie was popular, but the meme wasn't the result of the movie -- the meme was already there and the movie made it explicit and gave the idea a convenient handle. Human psychology plays a role. The Matrix as a concept has probably always been found in stories as a weak collective meme, but modern technology brought it more immediately and uniformly into our collective awareness.

I think religion is like that: a story that wrote itself from all the loose ends of what we already believe. Religious leaders are good at feeling and completing these collective patterns. Religion is probably in trouble because many of the memes are so anachronistic now. They survive to the extent that the ideas are based on psychology, but the other stuff creates dissonance.

This isn't something to reference (I'm sure there are zillions of books developing this) or a personal theory; it's more or less a typical view about religion. It explains why there are so many religions differing in details (different things sounded good to different people) but with common threads (because the religions evolved together with overlapping cultures and reflect our common psychology).
Eliezer Yudkowsky (score 9, 13y)
In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I'll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because "God" is more burdensome than "simulator", God is antireductionist and "simulator" is not, and faking the details of evolution in particular in order to save a hypothesis invented by illiterate shepherds is a more complex specification in the theory than "the laws of physics in general are being approximated". To me it seems nakedly obvious that "God faked the details of evolution" is a far more outré and improbable theory than "our universe is a simulation and the simulation is approximate". I should've been able to leave filling in the details as an exercise to the reader.

Extended digression about how to adjust Solomonoff induction for making anthropic predictions plz

Will_Newsome (score 6, 13y)
This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word 'God' without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it's not an argument that what people sometimes mean by 'God' is a wrong idea. Saying 'simulator' is just appealing to an audience interested in a different literary genre. Turing equivalence, man!) Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.
jacob_cannell (score -4, 13y)
For many people the word "God" appears to just describe one's highest conception of good, the north pole of morality. Such as: "God is Love" in Christianity. From that perspective, I guess God is Rationality for many people here.

For many people the word "God" appears to just describe one's highest conception of good, the north pole of morality.

People might say that, but they don't actually believe it. They're just trying to obfuscate the fact that they believe something insane.

Will_Newsome (score -5, 13y)
Perplexed (score 6, 13y)
Trusting one's 'gut' impressions of the "nakedly obvious" like that and 'leaving the details as an exercise' is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and you just need to get some intellectual work done. But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines. Looking at our heuristics and biases with a bit of skepticism. Isn't that what this community is all about? But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.
cousin_it (score 2, 13y)
Seconding Kevin's request. Seeing a sentence like that with no followup is very frustrating.
CronoDAS (score 3, 13y)
The post you are looking for is Religion's Claim to be Non-Disprovable.
Perplexed (score 5, 13y)
Thx. But I don't read that as arguing against religion. Instead it seems to be an argument against one feature of modern religion - its claim to unfalsifiability (since it deals with a Non-Overlapping Magisterium, 'NOMA' using the common acronym). Eliezer thinks this is pretty wimpy. He seems to have more respect for old-time religion, like those priests of Baal who stuck their necks out, so to speak, and submitted their claims to empirical testing. Can this attitude of critical rationalism be redeployed against simulationist claims? Or at least against the claims of those modern simulationists who keep their simulations unfalsifiable and don't permit interaction between levels of reality? Against people like Bostrom who stipulate that the simulations that they multiply (without necessity) should all be indistinguishable from the real thing - at least to any simulated observer? I will leave that question to the reader. But I don't think that it qualifies as a posting in which Eliezer argues against religion in toto. He is only arguing against one feature of modern apologetics.
CronoDAS (score 9, 13y)
The other part of the argument in that post is that existing religions are not only falsifiable, but have already been falsified by empirical evidence.
timtyler (score 3, 13y)
A "Truman Show"-style simulation. Less burdensome on the details - but their main application seems likely to be entertainment. How entertaining are you?
Perplexed (score 7, 13y)
I'll have to review your arguments to provide a really well informed response. Please allow me roughly 24 hours. But in the meantime, I know I have seen arguments invoking Occam's razor and "locating the hypothesis" here. I was under the impression that some of those were yours. As I understand those arguments, they apply equally well to theism and simulationism. That is, they don't completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors.
timtyler (score 2, 13y)
Occam's razor weighs heavily against theism and simulism - for very similar reasons. Probably a bit more heavily against theism, though. That has a bunch of additional razor-violating nonsense associated with it. It does not seem too unreasonable to claim that the razor weighs more heavily against theism.
Zack_M_Davis (score 0, 13y)
"Decoherence is Simple" seems relevant here. It's about the many-worlds interpretation, but the application to simulation arguments should be fairly straightforward.
Perplexed (score 1, 13y)
I'm afraid I don't see the application to simulation arguments. You will have to spell it out. I fully agree with EY that Occam is not a valid argument against MWI. For that matter, I don't even see it as a valid argument against the Tegmark Ultimate Ensemble. But I do see it as a valid argument against either a Creator (unneeded entity) or a Simulator (also an unneeded entity). The argument against our being part of a simulation is weakened only if we already know that simulations of universes as rich as ours are actually taking place. But we don't know that. We don't even know that it is physically and logically possible. Nevertheless, your mention of MWI and simulation in the same posting brings to mind a question that has always bugged me. Are simulations understood to cover all Everett branches of the simulated world? And if they are understood to cover all branches, is that broad coverage achieved within a single (narrow) Everett branch of the universe doing the simulating?
Zack_M_Davis (score 0, 13y)
My thought was that the post linked in the grandparent argues that we should prefer logically simpler theories but not penalize theories just because they posit unobservable entities, and that some simple theories predict the existence of a simulator. Yes, the possibility of simulations is taken as a premise of the simulation argument; if you doubt it, then it makes sense to doubt the simulation argument as well.
Perplexed (score 2, 13y)
Perhaps we are using the word "simple" in different ways. Bostrom's assumption is the existence of an entity who wishes to simulate human minds in a way that convinces them that they exist in a giant expanding universe rather than a simulation. How is that "simple"? And, more to the point raised by the OP, how is it simpler than the notion of a Creator who created the universe so as to have some company "in His image and likeness".
Zack_M_Davis (score 7, 13y)
Bostrom is saying that if advanced civilizations have access to enormous amounts of computing power and for some reason want to simulate less-advanced civilizations, then we should expect that we're in one of the simulations rather than basement-level reality, because the simulations are more numerous. The simulator isn't an arbitrarily tacked-on detail; rather, it follows from other assumptions about future technologies and anthropic reasoning. These other assumptions might be denied: perhaps simulations are impossible, or maybe anthropic reasoning doesn't work that way---but they seem more plausible and less gerrymandered than traditional theism.
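For reference, Bostrom's paper makes this quantitative; a sketch of his core fraction (with the average-population terms, which cancel, already divided out):

$$f_{\text{sim}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}$$

where f_P is the fraction of civilizations that reach a posthuman stage and choose to run ancestor-simulations, and N̄ is the average number of such simulations each one runs. If f_P N̄ ≫ 1, nearly all observers with experiences like ours are simulated.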
ata (score 0, 13y)
Have you read the paper? I'm not convinced of it for a few reasons, but I'd consider it located at least.
Perplexed (score 0, 13y)
Yes, I had read Bostrom's paper. I would express my opinion of that argument using less litotes. But as to locating the hypotheses, I suppose I agree. Which leads me to ask, have you read the catechism? Like most Catholic schoolchildren, I was encouraged to memorize much of it in elementary school, though I have since forgotten almost all of it. It also locates one hypothesis, a hypothesis considerably more popular than Bostrom's.
wedrifid (score 0, 13y)
My new word of the day. It's not a bad one!
Will_Newsome (score -2, 13y)
(Somewhat related: for those that haven't seen it, Eliezer's Beyond the Reach of God is an excellent article.)
Perplexed (score 3, 13y)
Perhaps I missed the point of your recommendation. That article by Eliezer seems to argue against the existence of a benevolent God who allows evil and death but does not balance this by endowing humans with immortal souls. Since at least 95% of those who worship Jehovah (to say nothing of Hindus) understand the Deity quite differently, I don't really see the relevance. But while I am speaking to you, I'm curious as to whether (in my grandfather comment) I correctly captured the point of your OP.
Normal_Anomaly (score 3, 13y)
From what I've seen, the primary argument for simulationism is anthropic: if simulating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more simulations out there than "basement realities", so we're probably in a simulation. What effect MWI has on this, and what other arguments are out there, I don't know. Typical atheist arguments focus on it not being necessary for god to exist to explain what we see, and this coupled with a low prior makes theism unjustified--basically the "argument from no good evidence in favor". This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that's one more good argument than theism has.
Perplexed (score 0, 13y)
If creating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more creations out there than "basement realities", so we're probably in a creation. Luckily for the preservation of my atheism, I don't find the 'anthropic argument' for the simulation good. And I put the scare quotes there, because I don't think this is what is usually known as an anthropic argument.

"Powerful aliens" has connotations that may be even more inaccurate; it makes me think of Klingon warlords or something.

Will_Newsome (score 4, 13y)
What I think of as the informal definition of atheism is something like "the state of not believing in God or gods". I believe in gods and God, and I take this into account in my human approximation of a decision theory. I'm not yet sure what their intentions are, and I'm not inclined to worship them yet, but by my standards I'm definitely not an atheist. What is your definition of atheism such that it is meaningfully different from 'not religious'? Why are we throwing a good word like 'theism' into the heap of wrong ideas? It's like throwing out 'singularity' because most people pattern match it to Kurzweil, despite the smartest people having perfectly legitimate beliefs about it. It doesn't really matter, I just think that it's sad that so many rationalists consider themselves atheists when by reasonable definition it seems they definitely are not, even if atheism has more correct connotations than the alternatives (though I call myself a Buddhist, which makes the problem way easier). Perhaps I am not seeing the better definition?
Document (score 2, 13y)
Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that people at SIAI were considering renaming it for related reasons.
ata (score 3, 13y)
Here's the one I remembered (there may have been a couple of other mentions): [...] (I agree with this, but do not have a better name to propose.)
Will_Newsome (score 0, 13y)
I think they're going to drop the 'for Artificial Intelligence' part, but I think they're keeping the 'Singularity' part, since they're interested in other things besides seed AI that are traditionally 'Singularitarian'. (Side note: I'm not sure if I should use 'we' or 'they'. I think 'they'. Nobody at SIAI wants to speak for SIAI, since SIAI is very heterogeneous. And anyway I'm just a Visiting Fellow.) The social engineering aspects of the problem are complicated. Accuracy, or memorability? Rationalists should win, after all...
TheOtherDave (score 3, 13y)
You could go with "it" and sidestep the problem.
Will_Newsome (score -1, 13y)
Thanks!
[anonymous] (score 1, 9y)
It bothers me when an easily researched, factually incorrect statement is upvoted so many times. There are many different definitions of atheism, but one good one might be: [...] The book does not define personal or transcendent, but it is unlikely that either would exclude "god is an extradimensional being who created us using a simulation" as a theistic argument. For example, one likely definition of transcendent is: [...] Beings living outside the simulation would definitely qualify as transcendent since we have no way of experiencing their universe. To be clear, I am not saying this is the only possible definition of atheism. I am only saying that it is one reasonable definition of atheism, and to claim that it is not a definition, as Eliezer's post has done, is factually incorrect.
Will_Newsome (score 0, 13y)
Most upper ontologies allow no such ontological distinction. E.g. my default ontology is algorithmic information theory, which allows for tons of things that look like gods. I agree with the rest of your comment, though. I don't know what 'worship' means yet (is it just having lots of positive affect towards something?), but it makes for a good distinction between religion and not-quite-religion. Time for me to reread A Human's Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.
Jack (score 1, 13y)
I'm curious to know why you prefer this language. I kind of like it too, but can't really put a finger on why.

Primarily because I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy. Secondarily because the language is culturally rich. Tertiarily because I get to figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it's fun to rediscover those concepts and find their naturalistic basis. Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments. Zerothly (I forgot the most important reason!) it is easier to speak in such a way, which makes it easier to see implications and decompartmentalize knowledge. Senarily it is more aesthetic than rationalistic jargon.

I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy

I agree that verbal masturbation is fun, but it's not helpful when you're trying to actually communicate with people. Consider purchasing contrarian glee and communication separately.

steven0461 (score 1, 13y)
That's a good point, but where do you recommend getting contrarian glee separate from communication?
Document (score 4, 13y)
Cached thoughts: Crackpot Theory (48 readers)? Closet Survey, The Strangest Thing An AI Could Tell You, The Irrationality Game? Omegle?

I wish crackpot theories were considered a legitimate form of art. They're like fantasy worldbuilding but better.

anon895 (score 3, 13y)
Here, of course.
Will_Newsome (score 0, 13y)
I agree, though I was describing the case where I can do both simultaneously (when I'm talking to people who either don't mind or join in on the fun). This post was more an example of just not realizing that the use of the word 'theism' would have such negative and distracting connotations.
Sniffnoy (score 2, 13y)
Except I think it's safe to say this sort of thing typically isn't what they mean, merely what they perhaps might mean if they were thinking more clearly. And it's not at all clear how you could find analogs to the more concrete religious ideas (e.g. chakras or the holy trinity). If the person would violently disagree that this is in fact what they intended to say, I'm not sure it can be called "charity of interpretation" anymore. And while I agree steel-manning of bad arguments is important, to do it to such an extent seems to be essentially allowing your attention to be hijacked by anyone with a hypothesis to privilege.

I think Ben from TakeOnIt put it well:

P.P.G. Bateson said:

Say what you mean, even if it takes longer, rather than use a word that carries so many different connotations.

Interestingly, I can't actually think of a word with more connotations than "God". Perhaps this is a function of the fact that:

  1. All definitions of "God" agree that "God" is the most important thing.
  2. There is nothing more disagreeable than what is the most important thing.

There's definitely something deeply appealing about theistic language. That's what makes it so dangerous.

Jack (score 0, 13y)
That advice makes sense for general audiences. Your average Christian might read a version of the Simulation argument written with theistic language as an endorsement of their beliefs. But I really doubt posters here would.
Perplexed (score 3, 13y)
Frank Tipler actually produced a simulation argument as an endorsement of Christian belief. Along with some interesting cosmology making it possible for this universe to simulate itself! (It's easy when the accessible quantity of computronium tends to infinity as the age of the universe approaches its limit.) In Tipler's theory, God may not exist yet, but a kind of Singularity will create Him. Of course, the average Christian has not yet heard of Tipler, nor would said Christian accept the endorsement. But it is out there.
JoshuaZ (score 1, 13y)
One issue I've never understood about Tipler is how he got from theism to Christianity using the Omega Point argument. It seems very similar to the SMBC cartoon Eliezer already linked to. Tipler's argument is a plausibility argument for maybe, something, sort of like a deity if you squint at it. Somehow that then gives rise to Christianity, theology and all.
Document (score 1, 13y)
It's worth pointing out that we now know that the universe's expansion is accelerating, which would rule out the Omega Point even if it were plausible before.
Perplexed (score 2, 13y)
IIRC, Tipler had that covered. A universe of infinite duration allows us to use eons of future time to simulate a single second of time in the current era. Something like the hotel with infinitely many rooms. But please don't ask me to actually defend Tipler's mumbo-jumbo.
gwern (score 2, 13y)
I don't think it can be defended any more. I picked it up a few weeks ago, read a few chapters, and thought, do I want to read any more given that he requires the universe to be closed? Dark energy would seem to forbid a Big Crunch and render even the early parts of his model moot.
SRStarin (score 4, 13y)
Sweet! Wikipedia's image for Physical Cosmology, including your Dark Energy link, is the cosmic microwave background map from the WMAP mission. That was the first mission I worked on with NASA. My job, as junior-underling attitude control engineer, was to come up with some way to salvage the medium-cost, medium-risk mission if a certain part failed, and to help babysit the spacecraft during the least fun midnight-to-noon shift. Still, it feels good to have been a tiny part of something that has made a difference in how we understand our universe. Disclaimer: My unofficial opinions, not NASA's. Blah, blah, blah.
Document (score 0, 13y)
I think you duplicated my post.
gwern (score 1, 13y)
So I did. Context in Recent Comments unfortunately only reaches so far.
Jack (score 1, 13y)
How does he get from there to Christianity in particular?
wedrifid (score 7, 13y)
If you are assuming infinite computronium you may as well go ahead and assume simulation of all of the conceivable religions! I suppose that leaves you in a position of Pascal's Gang Mugging.
Will_Newsome (score 2, 12y)
That's basically Hindu theology in a nutshell. Or more accurately, Pascal's Gang Maybe Mugging Maybe Hugging.
fubarobfusco (score 2, 12y)
If you assume a Tegmark multiverse (that all definable entities actually exist), then it seems to follow that:

- All malicious deprivation (some mind recognizing another mind's definable possible pleasure, and taking steps to deny that mind's pleasure) implies the actual existence of the pleasure it is intended to deprive;
- All benevolent relief (some mind recognizing another mind's definable possible suffering, and taking steps to alleviate that suffering) implies the actual existence of the suffering it is intended to relieve.
TheOtherDave (score -1, 12y)
It does not follow from the fact that I am motivated to prevent certain kinds of suffering/pleasure, that said suffering/pleasure is "definable" in the sense I think you mean it here. That is, my brain is sufficiently screwy that it's possible for me to want to prevent something that isn't actually logically possible in the first place.
Perplexed (score 1, 13y)
Since religions are human inventions, I would guess that any comprehensive simulation program already produces all conceivable religions. But I'm guessing that you meant to talk about the simulation of all conceivable gods. That is another matter entirely. Even with unlimited computronium, you can only simulate possible gods - gods not entailing any logical contradictions. There may not be any such gods. This doesn't affect Tipler's argument though. Tipler does not postulate God as simulated. Tipler postulates God as the simulator.
Perplexed (score 0, 13y)
I'm not sure. I only read the first book, "The Physics of Immortality". But I would suppose that he doesn't actually try to prove the truth of Christianity - he might be satisfied to simply make Christian doctrine seem less weird and impossible.
SilasBarta (score 0, 13y)
Here's a direct comparison of the two that I made.
steven0461 (score 5, 13y)
There's a buttload of thinking that's been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don't think it is. (For any discredited theory along the lines of gods or astrology, you want to focus on its advocates from the past more than from the present, because the past is when the world's best minds were unironically into these things.)
Jack (score 0, 13y)
There's also the opportunity for a kind of metatheology, which might lead to some really interesting insights into humans and how they relate to the world.
Will_Newsome (score -2, 13y)
Tangentially, it's important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say. (Christ more than His disciples, Buddha more than Zen practitioners, Freud and Jung more than their followers, et cetera.) Many people who are now considered brilliant/inspiring had something legitimately interesting to say. History is a decent filter for intellectual quality. That said, everything you'd ever need to know is covered by a combination of Terence McKenna and Gautama Buddha. ;)
Nornagest (score 6, 13y)
This doesn't follow. The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn't have any obvious correlation with intelligence. I'd also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.
Will_Newsome (score -1, 13y)
You're mostly right; upvoted. I suppose I was thinking primarily of Buddhism, which was pretty damn exceptional in this regard. Buddha was ridiculously prodigious. There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I'm not aware of them. Actually, if anyone has links to interesting writing from smart non-Sufi Muslims, I'd be interested. This kind of depends on criteria for success. If number of adherents is what matters then I agree; if correctness is what matters then it's probably a very similar skill set. Look at what postmodernists would probably call Eliezer's Singularity subreligion, for instance.
jacob_cannell (score 0, 13y)
There's a serious problem with this in Christianity in that you have to figure out what the founder actually said in the first place, which is very much an open problem concerning Christianity (and perhaps Buddhism as well, but I am less familiar with it at the moment). For example, just this century with the rediscovery of the Gospel of Thomas you get a whole new set of information which is... challenging to integrate, to say the least, and also very interesting. About half of the sayings are different (usually earlier, better) versions of stuff already in the synoptics, but there are some new gems - check out 22 [...] or 108 [...]
Desrtopa (score 0, 13y)
Those are certainly things that weren't in the Bible before, which people would have put a lot of work into interpreting if they had been, but "gems" is not the word I'd use.
Nornagest (score 0, 13y)
Point taken. I was thinking of number of adherents.
Will_Newsome (score -2, 13y)
Also I should note that by 'intelligence' I mostly meant 'predisposition to say insightful or truthful things', which is rather different from g.
wedrifid (score 0, 13y)
Just be careful of true believers that may condemn you for heresy for using the other tribe's jargon! ;) 'Worship' or 'Elder Rituals' could not be reasonably construed as a relevant reply to your thread.
Will_Newsome (score 6, 13y)
Eliezer is trying to define theism to mean religion, I think, so that atheism is still a defensible state of belief. I guess I'm okay with this, but it makes me sad to lose what I saw as a perfectly good word.
Jack (score 5, 13y)
Strongly agree. Better to avoid synonyms when possible. 'Simulationism' is ugly and doesn't seem sufficiently general in the way 'theism' does.
-1[anonymous]13y
I know one isn't supposed to use web comics to argue a point, but I've always found SMBC to be the exception to that rule. Maybe not always to get the point across so much as to lighten the mood.
4shokwave13y
When I want to discuss something, I use a relevant SMBC comic to get people to locate the thing I am talking about. If I say "decision theory ethics," people glaze over. I link this and they get it immediately. Somewhat off topic: when people want to use god-particles, et cetera, to justify belief in God, I use this. It is significantly more effective than any argument I've employed.
-2Miller13y
Yes. Next. I think this post demonstrates the need for downvotes to count as a greater-than-1.0 multiple of upvotes. What argument is there against that, other than the status quo?
1shokwave13y
To the extent that positive karma is a reward for the poster and an indication of what people desire to see (both very true), we should not expect a distribution centered on zero. If the average comment is desirable and deserving of reward, then the average comment will be upvoted.
1Miller13y
I didn't say anything about centering on zero, and I agree that would be incorrect. However, modifying the current method is likely challenging, and no one's actually going to do any novel karma engineering here, so it was a silly comment for me to make.
-3lukstafi13y
[Deleted: Gods "run an intrinsically infinitary inference system".] ETA: agreed, silly.
1shokwave13y
The claim that gods "run an intrinsically infinitary inference system" is summarily rejected. What does "intrinsically infinitary" even mean?
0[anonymous]13y
For example, outside the domain of Goedel's theorems.

This post could use a reminder of Less Wrong's working definition of the supernatural (of which theism, as virtually everyone uses the term, is surely a proper subset): it's something that involves an ontologically basic mental entity. We have no reason to suspect the existence of such things, and the simulation argument -- since it certainly does not appeal to such things -- doesn't change that a bit. Any resemblance to theism is superficial at most.

I'd also be curious to know what popular arguments for atheism you happen to think are so much weaker than you'd expected.

EDIT: ignore that last question if you like, I'm getting a sense for it elsewhere in the thread (though do not really agree).

6jacob_cannell13y
Carrier's definition of supernaturalism as non-reductionist explanation involving ontologically basic mental entities is something of a strawman and makes the term somewhat useless (i.e., it is not the definition many theists would even defend). The more typical definition of supernaturalism refers to events that operate outside the normal laws of physics. This definition is potentially relevant to simulationism, because a simulator would of course be free to occasionally intervene and violate normal physical "law" if so desired. Of course, this entity itself would still be reducible to simpler physical processes in its own universe.
1Sniffnoy13y
But what does that even mean? How are the "normal" laws of physics distinguished from the actual laws of physics?
2jacob_cannell13y
The normal laws of physics are those that predict the universe absent interventions from said external universe, which may include some extraneous special-case code. The same physics could describe the whole system at some deeper level, of course, so perhaps "normal" was not quite the right distinction. Limited?

I don't think the implications of accepting the simulation argument on one's worldview are that similar to believing in a supernatural omniscient creator of the universe and arbiter of morality. Absent a ready label for "one who accepts the simulation argument in a naturalistic framework," it's probably more convenient for such people to simply identify as "atheist." Conflating simulationism with theism is only liable to lead to confusion.

0Will_Newsome13y
Voted up and agreed; I often forget that Less Wrong is rightly conscientious about keeping inferential distances imposed by terminological suboptimality to a minimum.

Conflating simulationism with theism is only liable to lead to confusion.

This observation dissolves your post. If you agree with it then repent properly, o' sinner.

1Will_Newsome13y
It doesn't really dissolve what I was actually trying to get at with my post, though; it just means I didn't do a good job at explaining what I was getting at. How do rationalists repent? I have karma to burn...
2Miller13y
I'd say they repent by updating their beliefs and cleaning up the debris left by their old ones. This is rather similar for rationalists and non-rationalists alike, really. Kind of like apologizing for stealing candy from the drugstore and promising to pay it back.
-1Will_Newsome13y
Hm, that's not a particularly natural fit here... the only beliefs I'd be updating are beliefs about what styles of communication should be normative. Still, it's my style to treat ontological disagreement as a big deal, so I'll update accordingly.
-4jacob_cannell13y
How so? The SA posits an external universe above ours which, although operating according to physics likely identical or very similar to ours, is not at all constrained by our physics. Thus the creator in the SA is quite possibly supernaturally omniscient and omnipotent. Also, whatever utility function/morality we have in our universe, the SA indicates, and indeed requires, that it was purposefully created to some end in the parent universe and may eventually be evaluated according to some external utility function. EDIT: Removed bit about "new theism" - it has the wrong connotations. This set of conjectures is very similar to, but distinct from, traditional theism. Perhaps it needs a new word, but it is a valid domain of knowledge.
4Desrtopa13y
The simulators, should they exist, do not appear to reward belief or worship. We have no reason to regard them as moral authorities, and they do not intervene, with or without appeals. Plus, while the simulators can presumably access all of the data in the simulation, that doesn't mean that they would be able to keep track of it, or predict the results should they interfere in a chaotic system, so there's no reason to suppose that they're functionally omniscient. Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision without actually running the simulation. It does not in any way follow from the simulation argument that our morality was purposefully created by the simulators; by all appearances the simulation, should it happen to be one, is untampered with, and our utility functions evolved. You can build up a religious edifice around simulationism, but like supernatural theism, it requires the acceptance of completely unevidenced assertions.
2JoshuaZ13y
If one can pause a simulation and run it backwards or make multiple copies of a simulation, then from our perspective, for many purposes, the simulators will be omniscient. There might still be some limits in that regard (for example, if they are bound to do only computable operations, then they will be limited in what math they can do). Also, if a simulator wants a specific outcome, and there's some random aspect in the simulation (such as from quantum mechanical effects), they could run the simulation multiple times until they got a result they wanted. This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.
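As an aside, the divergence under imperfect accuracy is easy to exhibit numerically. A minimal sketch, using the logistic map as a stand-in chaotic system (the map and the 1e-12 perturbation are illustrative choices, not anything specified in the thread):

```python
# Two trajectories of the logistic map x -> 4x(1 - x), a standard
# chaotic system, started a mere 1e-12 apart.
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")
# The gap roughly doubles each step; by step ~40 it is of order 1 and the
# trajectories are unrelated, so any error in the measured initial
# condition eventually ruins prediction.
```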
0Desrtopa13y
But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that's many realities that actually occur which don't achieve the results they want for every one that does. Likewise, rewinding the simulation may allow them to achieve the results they want, but it doesn't prevent the events they don't want from happening to us. Besides, there's no evidence that our universe is being guided according to any agent's utility function, and if it is, it's certainly not much like ours. Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information. Projecting the simulation with perfect accuracy is equivalent to running the simulation.
0jacob_cannell13y
The SA mechanism places many constraints on the creator. They exist in a universe like ours, they are similar to our future descendants, they created us for a reason, and their utility function, morality, what have you, all evolved from a universe like ours.
0jacob_cannell13y
Monte Carlo simulation. You don't run one simulation, you run many. There is no single correct answer that the simulation is attempting to compute. It is a landscape, a multiverse, from which you sample.
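For concreteness, here is a toy sketch of that sampling picture: many randomized runs and a distribution of outcomes rather than one answer. The random-walk "world" is a placeholder, not anything specified in the thread:

```python
import random

def one_run(steps=100):
    # One randomized "world": a simple +/-1 random walk.
    return sum(random.choice((-1, 1)) for _ in range(steps))

# Sample the landscape of outcomes instead of computing a single answer.
runs = [one_run() for _ in range(10_000)]
print("mean endpoint:", sum(runs) / len(runs))
print("P(endpoint > 10):", sum(r > 10 for r in runs) / len(runs))
```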
0JoshuaZ13y
Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping, there's only one universe: the one where the simulators got what they wanted. Yes, you've made that point before. I don't disagree with it. I'm not sure why you are bringing it up again. It must contain the same information. It doesn't need to contain the same rules. This isn't true. For example, the doubling map is chaotic. Despite that, many points can have their orbits calculated without such work. For example, if the value of the starting point is rational, we can always give an exact value for any number of iterations, with less computational effort than simply iterating the function would require. There are some complicating factors to this sort of analysis; in particular, if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn't discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex. There have been some papers trying to map out connections between the two (and I don't know that literature at all), and superficially there are some similarities, but if someone could show deep, broad connections of the sort you seem to think are already known, that would be the sort of thing that could lead to a Turing Award or a Fields Medal.
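The rational-starting-point claim can be made concrete. A sketch for the doubling map x -> 2x (mod 1): the n-th iterate of p/q is (p * 2^n mod q)/q, which fast modular exponentiation computes in O(log n) steps rather than n:

```python
from fractions import Fraction

def doubling_direct(p, q, n):
    # n-th iterate of p/q under x -> 2x (mod 1): (p * 2^n mod q) / q.
    # pow(2, n, q) is fast modular exponentiation, O(log n) multiplications.
    return Fraction(p * pow(2, n, q) % q, q)

def doubling_slow(x, n):
    # Naive iteration, for comparison: n multiplications.
    for _ in range(n):
        x = (2 * x) % 1
    return x

print(doubling_direct(3, 7, 10**9))                                    # instant, exact
print(doubling_slow(Fraction(3, 7), 20) == doubling_direct(3, 7, 20))  # True
```

So a chaotic map can still have cheaply predictable orbits on a dense set of points, which is the gap between "chaotic" and "computationally complex" being pointed at here.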
0Desrtopa13y
But at any given time you may be in a branch that's going to be deleted or rewound because it doesn't lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don't want. So not only do we have no reason to suppose it's happening, it also wouldn't be particularly useful to us to suppose that the branch the simulators want is better for us than the ones they don't. I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob_cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.
0jacob_cannell13y
Which are the "extraneous additions"? Omniscience and omnipotence have already been discussed at length - the SA does not imply perfection in either category on the part of the creator, but this is a meaningless distinction; for all intents and purposes, the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion. (I discussed that at length elsewhere, but basically I think future posthumans would be less likely to intervene in our history, while aliens would be more likely.) Also, my points about the connectedness between the morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its universe, and that the utility function or morality of the creator evolved from something like our descendants.
0JoshuaZ13y
Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common then the majority of experience will be in universes which are very close to that desired by the simulators. No disagreement there.
-2jacob_cannell13y
Yes, this is precisely the primary utility for the creator. But humans do this too, for intelligence is all about simulation. We created computers to further amplify our simulation/intelligence. I agree mostly with what you're saying, but let me clarify. I am fully aware of the practical limitations; by functionally omniscient, I meant that they can analyze and observe any aspect of the simulation from a variety of perspectives, using senses far beyond what we can imagine, and the flow of time itself need not be linear or continuous. This doesn't mean they are concerned with every little detail all of the time, but I find it difficult to believe that anything important, from their perspective, would be missed. And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA chains that morality to the creator's morality in several fashions. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality. You know: "As man is, god was; as god is, man shall become." I'm not sure about your "religious edifice," or which assertions are unevidenced.
2JoshuaZ13y
This only makes sense in the very narrow version of the simulation hypothesis under which the simulators are in some way descended from humans or products of human intervention. That's not necessarily the case.
-4jacob_cannell13y
That's true, but I"m not sure if the "very narrow" qualifier is accurate. The creator candidates are: future humans, future aliens, ancient aliens. I think utility functions for any simulator civilizations will be structurally similar as they stem from universal physics, but perhaps that of future humans will be the most connected to our current.
5JoshuaZ13y
No. You are assuming that the simulators are evolved entities. They could also be AIs, for example. Moreover, there's no very good reason to assume that the moral systems would be similar. For example, suppose we had the ability to make very rough simulations, and things about as intelligent as insects evolved in the simulation. Would we care? No. Nor would our moral sense in any way match theirs. Now suppose there is something that is vastly smarter than humans and lives in some strange 5-dimensional space. It is wondering whether star formation can occur in 3 dimensions and, if so, how it behaves. The fact that something resembling fairly stupid life has shown up in some parts of its system isn't going to matter to it, unless some of that life does something that interferes with what the entity is trying to learn (say, the humans decide to start making Dyson spheres or engage in star lifting). Incidentally, even this could pattern-match to some forms of theism ("For God's ways are not our ways..."), which leads to a more general problem with this discussion. The apologetics and theology of most major religions have managed to say so many contradictory things (in this case the dueling claims are that we can't comprehend God's mysterious, ineffable plans, and that God has a moral system that matches ours) that it isn't hard to find something that pattern-matches any given claim. The primary strong reason not to care about simulationism has nothing to do with whether or not it resembles theism, but the simple reason that it doesn't predict anything useful. There's no evidence of intervention, and we have no idea what probabilities to assign to different types of simulators. So the hypothesis can't pay rent.
-2jacob_cannell13y
AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas). I would be surprised if future posthumans, or equivalent Singularity-tech aliens, had moral systems just like ours. On the other hand, moral or goal systems are not random, and are subject to evolutionary pressure just as much as anything else. So as we understand our goal systems or morality and develop more of a science of it, we can understand it in objective terms, how it is likely to evolve, and learn the shape of likely future goal systems of superintelligences in this universe. Your insect example is not quite accurate: there are people right now who are simulating the evolution of early insects. Yes, the number of researchers is small and they are currently just doing very rough, weak simulation using their biological brains, but nonetheless. Also, our current time period does not appear to be a random sample in terms of historical importance. In fact, we happen to live in a moment which is probably of extremely high future historical importance. This is loosely predicted by the SA. We do have a methodology for assigning probabilities to different types of simulators. First you start with a model of our universe and fill in the important gaps concerning the unobservables - both in the present, in terms of potential alien civilizations, and in the future, in terms of the shape of our future. Of this set of Singularity-level civilizations, we can expect them to run simulations of our current slice of space-time in proportion to its utility versus the expected utility of simulating other slices of space-time. They could also run, and are likely to run, simulations of space-time pockets in other universes unlike ours, fictional universes, etc. However a general rule applies - the more dissimilar the simulated universe is to the
4JoshuaZ13y
Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that. How is it not accurate? I fail to see how the presence of such research makes my point invalid. This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that run on cellular automata would be really interesting despite the fact that our universe doesn't seem to operate in that fashion. This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you've said here: that it is stuck at zero no matter how much you update with evidence. This is bad. But you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.
-2jacob_cannell13y
Sure, but the SH requires some connection between the simulated universe and the simulator universe. If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn't mean the probability distribution is flat across the landscape. The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, the farther away you go in this landscape from the parent universe, the larger the set of possible universes one could simulate becomes - it expands at least exponentially. The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours - different sample points in the multiverse described by our same physics. Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence. This has been mathematically formalized in AI theory and AIXI: intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
0JoshuaZ13y
No. See my earlier example with cellular automata. Our universe isn't based on cellular automata but we'd still be interested in running simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn't reduce my utility in running such simulations. That said, I agree that there should be a rough correlation where we'd expect universes to be more likely to simulate universes similar to them. I don't think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes. But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.

How low a percentage does one need to assign a claim in order to declare it to be closed? I'd assign around a 5% chance that there exists something approximating God (using this liberally to include the large variety of entities which fall under that label). I suspect that my probability estimate is higher than many people on LW. (Tangent: I recently had a discussion with an Orthodox Jewish friend about issues related to Bayesianism, and he was surprised that I assigned the idea that high a probability. In his view, if he didn't have faith and had to assign a probability he said it might be orders of magnitude lower.) So how low a probability do we need to estimate before we consider something closed?

Moreover, how much attention should we pay to apologetics in general? We know that theology and apologetics are areas that have undergone thousands of years of memetic evolution to become as dangerous as possible. They take almost every little opportunity to exploit the flaws in human cognition. Apologetic arguments aren't (generally) basilisk-level, but they can take a large amount of cognitive resources to understand where they go wrong. After 10 or 15 of them, how much effort do we need ... (read more)

4b1shop13y
Perhaps a question becomes a closed issue not when the probability of the belief reaches a certain point, but when our estimate of the probability of the belief changing reaches a certain threshold. A fair coin is heads 50% of the time, and my probability won't change. That's a closed question. I may be fairly confident about the modern theory of star formation, but I wouldn't be too surprised if a new theory added some new details. So it's not a closed subject. I can imagine no evidence that would lead me to believe in something nonfalsifiable. Theism is a closed subject.
0JoshuaZ13y
You say that you can't imagine evidence that would cause you to believe in something nonfalsifiable, but then you seem to apply that to theism in general. I'm curious: if, say, almost all the evangelical Christians in the world disappeared along with all the world's children, would you not assign a substantial probability to the Rapture having just taken place?
0b1shop13y
Fair point. Some religions make falsifiable claims. But my point still stands. I assign a low probability to the Rapture happening -- even lower than to there being a Christian God -- so I don't put much weight on the idea that my religious beliefs will change. The people who take the Rapture seriously do so because they also believe in nonfalsifiable things.
1lionhearted (Sebastian Marshall)13y
This comment is brilliant. In particular, I'd really, really love to see two top-level posts covering: ...and... Both are really fascinating insights; I'd love to read more. Especially the first one, about memetic evolution toward dangerousness - I wonder what various secular social and societal memes fit in similarly.
-9Will_Newsome13y

Didn't we have this conversation already? Words can be wrong. You can't easily divorce an existing word from its connotations, not by creating a new definition, certainly not by expecting the new definition to be inferred by the reader. There is no good reason to misuse words in this way, just state clearly what you intended to say (e.g. as komponisto suggested).

As it is, you are initiating an argument about definitions, activity without substance, controversy for the sake of controversy as opposed to controversy demanded by evidence.

3Will_Newsome13y
That was a different conversation, though the same theme of using words incorrectly also came up, if that's what you mean. There are good reasons to do so among people who share the same language, like me and some SIAI folk: it makes communication faster, and makes it easier to see single-step implications. Being precise has large consequences for brains that run largely on single-step insights from cached knowledge. I agree that in the case of this post my choice of language was flat-out wrong, though. Arguments about definitions are very important! Choosing a language in which it's easier to see implications is important for bounded agents. That said, it wasn't what I was trying to do with this post, and you're right that it would have been a totally lost cause if it had been.
2[anonymous]13y
To take advantage of this one might want to compress cached knowledge as much as possible; the resulting single step insights would then have correspondingly greater generality. Using structured personal knowledge databases along with spaced repetition would be one way of accomplishing this.

The basic problem of specific agent-created-this-universe hypotheses is that of trying to explain complexity with greater complexity without a corresponding amount of evidence. Things like the Simulation Argument and other notions of "agenty processes in general creating this universe" are certainly not as preposterous as theistic religion, particularly in the absence of a good understanding of how existence works, but I think it confuses things to refer to this as theism. If our universe is a simulation developed by a computer science undergrad (from another reality) for a homework assignment, then that doesn't make them our God.

I recall a while ago that there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes, let's move on and not get attached to old terminology. Rehabilitating the idea of "theism" to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?

-2jacob_cannell13y
The SA is not a new physical theory that requires new evidence or fills gaps in the current evidential record. It's more of a metaphysical revelation, or a model update based on the consequences of the future as modeled by current theory. For example, imagine a world consisting of just Adam and Eve on an island. They eat fruit and live a peaceful existence, learning what they can within the limits of their observations. Based on the available evidence, they assume that they spontaneously appeared out of the ocean. Sometime much later, Eve becomes pregnant and gives birth to a child, which begins to slowly change into something resembling its parents. At this point Adam and Eve have enough data to predict that they will spawn children which become likenesses of themselves, and it is also reasonable to conclude that they themselves originated from this process and have parents somewhere, rather than having crawled out of the ocean. Our planet is "pregnant" with a developing noosphere/technosphere, which we can predict will eventually spawn many child universes very much like our own. Any civilization or being powerful enough to simulate our reality is a god to us in every useful sense that the term god has ever had. Confusing future reality-simulating posthuman descendants with modern-day grad students is like confusing bacterial DNA with the internet.

I am interested in why you want to call simulation arguments, Tegmark cosmology, and Singularitarianism theism. I don't doubt there is a reference class that includes common-definition theistic beliefs as well as these beliefs; I do doubt whether that reference class is useful or desirable. At that point of broadness I feel like you're including certain competing theories of physics in the class 'theism'.

So I propose a hypothetical. Say LessWrong accepts this, and begins referring to these concepts as theistic, and renouncing their atheism if their Tegmarkian cosmological beliefs are stronger. What positive and what negative consequences do you expect from this?

Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism.

The word "but" in the last sentence is a non-sequitur if there ever were one. Tegmark cosmology is not theism. Theism means Jehovah (etc). Yes, there are people who deny this, but those people are just trying to spread confusion in the hope of preventing unpleasant social conflicts. There is no legitimate sense in which Bostromian simulation arguments or Tegmarkian cosmological speculations could be said to be even vaguely memetically related to Jehovah-worship.

The plausibility of simulations or multiverses might be an open question, but the existence of Jehovah isn't. There's a big, giant, huge difference. If we think Tegmark may be correct, then we can just say "I think Tegmark may be correct". There is no need to pay any lip-service to ancient mistakes whose superficial resemblance to Tegmark (etc) is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was.

9Anatoly_Vorobey13y
Isn't this - I'm sorry if that sounds harsh - arguing by a forceful say-so? Sure, if you constrain theism rhetorically to "Jehovah-worship", that practice doesn't sound very similar to the Bostromian arguments. But "Bostromian arguments/Tegmarkian speculations" and "the claim that a god created the universe" sound pretty much memetically related to me. You're saying that e.g. "we are living in a simulation run by sentient beings" and "we are living in a universe created by a sentient being" are such wildly different ideas that there's only superficial resemblance between them, and even that resemblance is unlikely to be noticed by anyone just thinking about the issue, and is rather spread as a kind of a perverse meme. Methinks thou dost protest too much. The earliest time I can remember that anyone drew a very explicit connection between simulations and theism is in Stanislaw Lem's short story about Professor Corcoran. The book was originally published in 1971, when Bostrom was -2 years old. It's in the second volume of his Star Diaries; see "Further Reminiscences of Ijon Tichy: I" in this (probably pirated) scribd doc. I'd recommend it to anyone. Of course, it's very much possible that Lem wasn't the first to write up the idea.
1komponisto13y
See Religion's Claim to be Non-Disprovable for discussion of what religion is and how it arose. By "memetically related" I do not mean "memetically similar" (although I don't think there's much similarity either); I mean "related" in the sense of ancestry/inheritance. Bostrom's and Tegmark's arguments are not a branch of religion; they do not belong in that cluster. No. The implication of the post, as I perceived it (have a look at its first paragraph) was "you guys shouldn't be so confident in your dismissal-of-religion ('atheism'); after all, you (perhaps rightly) are willing to entertain the ideas of Tegmark!" Surely you understand what is wrong with this. You think I don't believe what I'm writing?
7Anatoly_Vorobey13y
I think you're wrong on similarity [1] and irrelevant on ancestry/inheritance. Only some among currently active religions are clearly "related" in the sense you employ (e.g. Judaism and Christianity); there's no strong evidence that most or all are so related. Since you presumably have no problem lumping them together under "religion", the claim that BTanism (grouped and named so purely for convenience) has no common ancestry with these religions is irrelevant to whether it should be judged a religion. Also, I don't read the post as claiming "you guys are so dismissive of religion, but you're big on BTanism which is just as much a religion, so there!". Instead, I read the post as claiming "you guys are unreasonable in your overt dismissal of theism and your forceful insistence on it being a closed question, considering many of you are big on BTanism which has similar epistemological status to some varieties of theism". So it doesn't matter much whether BTanism is a religion or not; if that bothers you too much, just employ Taboo and talk about something like "a sentient being responsible for the creation of the observable universe" instead. I don't fully agree with this idea (the post's argument as I read it), but I find myself somewhat sympathetic to it. It is indeed true in my opinion that the overt and insistent dismissal of theism on LW is a community-cohesiveness driven phenomenon. There's illuminating prior discussion at The uniquely awful example of theism. No, I have no doubt that you believe what you're writing. Rather, I think that the strongly dismissive claims in your first comment in the thread, unbacked by any convincing argument or evidence, cause me to think that a strong cognitive bias is at work. [1] Really, the similarity is so strong that I see no need for a detailed argument; but if one is desired, I think Lem's story, to which I linked earlier, serves admirably as one.
5komponisto13y
This does not follow. It is not necessary for my argument that different religions all be related to each other; it is only necessary that BTanism not be related to any of them, and (this part I asserted implicitly by linking to Religion's Claim to be Non-Disprovable) that it not have been generated by a similar process. Varieties of "theism" which have similar epistemological status to BTanism are not subject on LW to the same kind of dismissal as religion, to the best of my knowledge. Nor should they be. But for the sake of avoiding confusion and undesirable connotations, they certainly shouldn't be called "theism". If what you mean here is "merely community-cohesiveness driven phenomenon", then I disagree entirely. You might have been right if this were RichardDawkins.net or another specifically atheism-themed community, but it isn't. This is Less Wrong. Our starting point here is epistemology. Rejection of religion ("theism") is a consequence of that; the rejection may be strong but it is still incidental. For my part, I see "open-mindedness" toward theism mostly as manifesting an inability to come to gut-level terms with the fact that large segments of the human population can be completely, totally wrong. The next biggest source after that is Will's problem, which is the pleasure that smart people derive from being contrarian and playing verbal and conceptual games. (If you like that, for goodness' sake be an artist! But keep your map-territory considerations pure.) Which? Again, this is Less Wrong, not a random internet forum. It is not possible to recapitulate the Sequences in every comment; that doesn't mean that strong opinions whose justifications lie therein are inadequately supported.
0Anatoly_Vorobey13y
OK, I think I now understand the implicit part; I think you mean that religions of old made total, and not merely ontological, claims, which BTanism doesn't (I wasn't sure before what you were picking up from Religion's Claim to be Non-Disprovable, which I do know and read before; I thought it had something to do with disprovability). I think you're right to point to that distinction. Well, why not, if they're varieties of theism? Perhaps it'd be better if LW found another word to condemn, other than theism? Such a word could be... theism! It does have two definitions, a broad and a narrow one. I checked a few dictionaries to be sure, and one of them helpfully elucidated the broad one as "the opposite of atheism", and the narrow one as "the opposite of deism". "Largely", rather than "merely", is how I would put it. I'm not certain I understand the rest of your paragraph. To my mind, atheism (or, more precisely, strong dismissal of theism) being incidental to LW's charter doesn't mean it can't become a way to cohere the group, to nurture a sense of belonging. Note, by the way, that rejection of theism made it to the Welcome post, and is a unique example of a specific shared LW value there. Although that may be for pragmatic rather than signalling reasons. That's an interesting theory I'd have to think about. Do you consider agnosticism as a subset of "open-mindedness", and thus the above as the primary explanation of agnosticism? I don't know; there are several possibilities and it'd be impolite, not to mention fruitless, on my part to speculate. Agreed in general. Not sure how well this applies in the particular case. This thread has focused on two assertions in your original comment: "[not] memetically related" and "superficial resemblance ... is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was". You cited a Sequence post in your follow-up comment about the former (but I don't see any reference to
5prase13y
The lumping together of religions under the category of "religion" isn't based on common ancestry, nor is it based solely on "the universe was created by god(s)". Religions have much more in common: e.g., reliance on tradition, sacred texts, sacred places, worship, prayer, belief in an afterlife, claims about morality, self-declared unfalsifiability, anthropomorphism, anthropocentrism. Saying that simulation arguments belong to the same class as Judaism, Hinduism, or Buddhism because they all claim that the world was created by intelligent agents is like putting atheism in the same category because it is also a belief about gods.

You're making good points, with which I largely agree, with some reservations (see below). I'd just point out that this wasn't the argument Komponisto was making - he was talking only about relatedness in the ancestry sense.

Your list of attributes is probably good enough to distinguish e.g. a simulation argument from "religions" and justify not calling it one. There are two difficulties, however. One is that adherence to these attributes isn't nearly as uniform among religions as it's often rhetorically assumed on LW to be. There's a tendency to: start talking about theism; assume in your argument that you're dealing with something like an omnipresent, omniscient monotheistic God of Judaism/Christianity whose believers are all Bible literalists; draw the desired conclusion and henceforth consider it applying to "theism" or "religion" in general. I find this fallacious tendency to be frequent in discussions of theism on LW. This comment from the earlier discussion is relevant, as are some other comments there. In this post, Eliezer comments that believing in simulation/the Matrix means you're believing in powerful aliens, not deities. Well, consider anc... (read more)

5CronoDAS13y
The Greek gods were, in fact, immortal. Other gods could wound or imprison them, but they couldn't be killed. The Norse gods, on the other hand, could indeed die, and were fated to be destroyed in the Ragnarok.
2Anatoly_Vorobey13y
Thanks! I'm not sure how come I was confused about this, but it's great to be corrected.
4prase13y
I know; nevertheless, I still wanted to stress that we don't define religion by a single criterion. That is why I haven't listed omni-qualities, immortality, and ontological distinctiveness among my criteria for religion. If you look at those criteria, the Greek religion satisfied almost all of them, save perhaps sacred texts and claims of unfalsifiability (it seems they had not enough time to develop the former and no reason for the latter). Religion usually goes beyond the question of the existence and identity of gods. (Now we can make a distinction between religion and theism, with the latter being defined solely in terms of gods' existence and qualities. I am not sure yet what to think about that possibility.) The line is not sharp, of course. Many people argue that Marxism is a religion, even though it explicitly denies god, and may have based that opinion on good arguments. It is also not entirely clear what to think about Scientology: religion, or simply cult? I don't think the classification is important at all. No, I haven't. Actually, my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice it, but don't take it seriously. It depends. Belief in the importance, hidden message, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable propositions is.
0Anatoly_Vorobey13y
I think we've converged on violent agreement, except one point: You're right. I retract this part.
1prase13y
I like the phrase.
-1jacob_cannell13y
So if I may take the implication: you don't take the SA seriously because... it seems memetically similar to ideas espoused or held by agents you deem irrational? Do you believe in calculus? Gravitation?
0prase13y
I thought it was clear from the previous discussion that the reason was the pretty weak testability of simulationism, rather than ad hominem reasoning.
0Desrtopa13y
Conflating simulationism with calculus or gravitation is absurd. Our universe would look very different if calculus or gravitation did not exist as we understand them, whereas we have no reason at all to suppose this is true of the simulation argument. There are statistical arguments for supposing it's true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.
0jacob_cannell13y
Calculus is a generic algorithmic tool, gravitation is an algorithmic predictive model of some subset of reality, and simulationism is a belief about reality derived from future predictions of current physical theory. Yes, these are distinct epistemological categories; my point was more that the similarity of simulationism to older theism is an inadequate reason to dismiss simulationism. This is, I believe, a common misunderstanding about the SA. Suppose you are given a series of seemingly random numbers - say, from a SETI signal. You put a crack team of mathematicians on it for many years, and eventually they develop a complex model for the sequence that can predict it. It also appears that you can derive timing from the signal and determine how long it has been progressing. Then later you are able to run the model forward and predict that it in fact eventually repeats itself... That last discovery is not a change to the model that need be justified by Occam's razor. It does not add one iota to the model's complexity. The SA doesn't add an iota of complexity to our model of reality - i.e., physics. It's a predicted consequence of running physics forward.
3JoshuaZ13y
Not necessarily. Given our understanding of the laws of physics, simulating our universe inside itself would be tough. Note that nothing in the simulation hypothesis requires that we are being simulated in a universe that has much resemblance to our apparent universe. (Digression: Even small amounts of monkeying with the constants of the universe can make universes that can plausibly give rise to life. See here (unfortunately everything beyond the summary is behind a paywall). And in some of those cases, it seems plausible that large scale computation might be easier. If certain inflationary models are correct then there should be lots of different universal bubbles with slightly different physical laws. Some of those could be quite hospitable to large-scale computation.)
0Desrtopa13y
The simulation argument isn't a predicted consequence of running physics forward; the scenario you put forward doesn't establish that we exist in a simulation, just that our universe follows predictable rules that can be computed forward. Postulating an entire universe outside the one we observe does add to the complexity of that model. The simulation argument is a probabilistic argument: if certain assumptions hold, then most apparent universes are in fact simulated by other universes, and thus our own is probably a simulation.
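For reference, the counting step behind that probabilistic argument is simple arithmetic. A sketch in the spirit of Bostrom's formulation, with purely hypothetical numbers:

```python
def sim_fraction(frac_posthuman, sims_per_civ):
    # Fraction of human-like observers who are simulated, given the
    # (hypothetical) fraction of civilizations that go posthuman and run
    # ancestor simulations, and the average number each runs.
    simulated = frac_posthuman * sims_per_civ
    return simulated / (simulated + 1)

print(sim_fraction(0.01, 10_000))  # ~0.99: even rare simulators dominate the count
```

The conclusion rides entirely on the assumed inputs, which is why the argument is probabilistic rather than a deduction from physics.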
0jacob_cannell13y
Not so at all. A model's complexity is not determined by the entities it references or postulates. For example, I have a model of the future which postulates new processors every few years. The model is not complex enough to capture every new processor from here to infinity, nor does it need to be. The model is simple, yet it can generate new postulated entities. You are, in effect, saying that my model, which postulates many new future processors, is somehow more "complex" than a model which postulates just three, or none.
0Desrtopa13y
An entire external universe adds to the complexity of the model, not just how many entities the model contains. This may not be the case if the simulation itself was produced in the universe as we know it, and our own apparent universe is only a simulated fragment. That isn't what I thought you were asserting, but that is untenable for completely separate reasons.
0jacob_cannell13y
What do you mean by complexity, and how is it at all relevant? Take Conway's Life, for example: tons of apparent complexity can emerge from rules simple enough to write on a bar napkin. Was the Copernican model "wrong" because it made our universe-model more complex? Was the discovery of multiple galaxies wrong for a similar reason? Many worlds? The only formal definition of complexity that is well justified is algorithmic complexity, and it has some justification as a quality metric for deciding between theories in terms of Solomonoff induction. The formal complexity of a universe-model is that of its simplest reduction. The simplest reduction for any scientific model is universal physics. So there is only one model, all complexity emerges from it, and saying things like "your premise X adds to the complexity of the model" is untrue and equivalent to saying "your premise X makes the model smell bad".
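The bar-napkin point is literal: Conway's Life fits in about a dozen lines. A minimal sketch (the glider check at the end is just a sanity test):

```python
from collections import Counter

def step(live):
    # `live` is a set of (x, y) cells; birth on 3 neighbors,
    # survival on 2 or 3 -- that's the whole rule set.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted by (1, 1): structure persists
```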
2Desrtopa13y
Adding a universe external to this one doesn't just add more stuff. To take the Conway's Game of Life example, suppose that you simulated an entire universe inside it, from the beginning. For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own. With evidence that their reality was a simulation, the proposition could be made more likely than the proposition that it stood alone. In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model, not just the entities described in it. The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.
2Jack13y
Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without adding much complexity, because the model can be recursively defined. The minimum message length need not increase much to add new universes; you just edit the escape clause. Where we are in the model doesn't matter.
2Perplexed13y
You seem to be thinking in terms of time complexity. Space complexity also needs to be considered. It seems axiomatic to me that an outer universe simulation can only contain nested universe simulations of lower space complexity than itself. If I am wrong, is there some discussion of this kind of issue online or in a well-known paper or textbook?
0JoshuaZ13y
This only follows if your universe can not only model other universes but can easily model universes that share its own rules of physics. This is a much stronger claim about the nature of a universe (for example, it seems likely that this is not true about our universe.)
0jacob_cannell13y
The SA does not "add" a universe external to the model. The SA is a deduction derived from the Singularity-model. The Singularity-model does not "add" the external universes either; they emerge within it naturally, just as naturally as future AIs do. That would only be true if their model was not also a full explanation of our universe, and thus isomorphic to some historical slice of our universe. Not at all. The Singularity-model is a scientific extrapolation of our observed history into the future. As it is scientific, it reduces to physics (the model approximates what we believe would happen if we could simulate physics into the future). The SA is not a model at all. It is a deduction which can be simplified down to: if the Singularity-model is accurate, then most observable universes are simulations, and thus our observable universe is probably a simulation. You seem to think the minimum message length is somehow physics plus extra simulations scrawled in. The physics generates everything, so it's already minimal. No - but only because the physics differ substantially. You are right, of course, that if Conway beings evolved and somehow had some singularity of their own in their future that generated simulated Conway universes, they would assign a lower prior to being embedded in a String/M-theory universe like ours. (They could of course still be wrong, as complexity is just a reasonable bias measure.) They'd attach higher credence to being embedded in a Conway universe. But if the simulated universe is based on the same physics, then it reduces to exactly the same minimal program, and that program absolutely describes both universes. This is very similar to the multiverse in physics and the space of universes string/M-whatever theory can generate.
2Desrtopa13y
As I mentioned before, I thought you were arguing the orthodox simulation argument, rather than one where the simulations are created from within our own universe. That would not necessarily increase the complexity of the model, but it's untenable for its own reasons. For one thing, it's far from given that any civilization would ever want to simulate the universe at a previous point; the reasons you provided before don't remotely justify such a project, and it's not a practical use of computing power. For another, assuming you're only simulating small fractions of the history of existence, the majority of all sentient beings in the universe would not be ones in a simulation. In fact, you would have to defy a number of probable assumptions about our universe to fit as much universe space and time inside the simulation as exists outside it.
2jwhendy12y
That. I think after all the comments I've scanned in this post, this was the first one where I really felt like I understood what the post was even really about. Thank you.
2jacob_cannell13y
The OP does not mention the term "religion". Part of the confusion seems to stem from the conflation of theism and religion. Theism is a philosophical belief about the nature of reality. The truthfulness of this belief as a map of reality is not somehow dependent on, or connected in belief space to, magic rituals, prayers, voodoo dolls, or the memes of organized religion, even if they historically co-occur.
2komponisto13y
I beg to differ. In my view, the conflation is of theism with simulationism.
2BecomingMyself13y
The way I read it, it seems like Will_Newsome is not using the word in this way. It may be a case of two concepts being mistakenly filed into the same basket -- certainly some people might, when they hear "Theism-in-general is a mistaken and sometimes harmful way of thinking about the world", understand "theism-in-general" to mean "any mode of thought that acknowledges the possibility of some intelligent mind that is outside and in control of our universe". Under this interpretation, the assertion is quite obviously false (or at least, not obviously true). I wonder if there is still a disagreement if we Taboo "theism"? (Though your point in the last paragraph is a good one, I think.)
4komponisto13y
Indeed not; hence my criticism!
0jacob_cannell13y
For some reason you seem to be categorizing the belief-space such that there is a little pocket called Jehovah-ism over here, and simulationism is another distinct island far, far away. The way I see it, theism is a whole vast region of belief-space, roughly carved out by the question: was the observable universe created by an agenty process? The SA leads us into that side of the belief-space, but the type of Jehovah-ism you mention is just a little slice of a large territory.
2Desrtopa13y
The two may branch in the same direction from that question, but that doesn't mean that their consequences are remotely similar. You seem to be substituting in cached thoughts from religion as the consequences of simulationism when they really don't follow from it.
0jacob_cannell13y
Such as?
0Desrtopa13y
Such as the morality of the simulators having any relation to our own. It would be much easier to simulate a universe from big-bang conditions, starting with a few basic rules and allowing it to evolve on its own, than to deliberately engineer any sort of life forms within it, and the basic rules of our universe do not dictate that any intelligent life form needs a utility function that closely resembles our own. Assuming it would even be practical for the simulators to single us out for observation, as such a minuscule part of the simulation, and that they judge us according to their own utility function, it's a big leap to suppose that they would do anything about it with repercussions inside our own universe, so for our purposes it probably wouldn't matter. Additionally, it's not established that the simulators would have practical control over the simulation. Given JoshuaZ's arguments, I concede that it's theoretically possible that the simulators could predict the output of the simulation in advance without running it, but that doesn't mean it's probable, let alone given.
0jacob_cannell13y
I suspect that a full universe simulation of all of space-time - fifteen billion years of an entire universe - may have a cost complexity such that it could never be realized in any currently conceivable computer, due to speed-of-light limitations. Even a galaxy-sized black hole may not be sufficient. You are talking about a Tipler-like scenario that would probably require some massive re-engineering of the entire universe. I can't rule this out, but from what I've read of astrophysicists' reactions, it is questionable whether it is possible even in principle to collapse the universe in the fashion required. (Tipler figures it requires tachyons, in his later writings.) So no, that would not be easier to simulate - it would be vastly more difficult, and may not even be possible in principle. The more likely simulation is one run by our posthuman descendants after a local Singularity on Earth, where they have a massive amount of computation - enough to simulate perhaps a galaxy or galaxies full of virtual humans, but not the entire history of our universe. We must remember that they will want to simulate many possible samples as well. They will also probably simulate hypothetical aliens and hypothetical contact scenarios. Basically, they will simulate important sample time-slices. Today humanity as a whole spends a large amount of time thinking about the present, slightly alternate versions of the present, historical time heavily weighted by importance, and projected futures. We already are engaging in the limited creation of simulated realities. The phenomenon has already begun - it started with dreams, language, and thought, and has more recently been amplified with computer simulation and graphics. Just chart that trajectory out into the future and amplify it by an exponential vastening...
0Desrtopa13y
This is not the ordinary simulation argument, or even closely related to it. The proposition that you reject, that our universe is simulable in its entirety, is one of the premises of that argument. I for one strongly predict that our descendants will never create a galaxy or multiple galaxies of virtual humans from their own past. It's ethically dubious, and far, far from being one of the most useful things they could do with that computing power if they simply want to determine the likely outcome of various contact scenarios or what hypothetical aliens would be like. By the time we're capable of it, it simply wouldn't have much to recommend it as an idea.
0Will_Newsome13y
I didn't mean to talk about Jehovah specifically; I thought that using 'theism' would imply enough generality that I could get away without clarification, but I was obviously very mistaken. I added a sentence to the end of the post. Your second paragraph seems to correctly point out a problem with my terminology. Nonetheless, perhaps we could also discuss what I was (admittedly poorly) trying to start a discussion about: the apparent tension between believing that strong optimization processes outside the observable universe are possible and believing that no such optimization process created the observable universe.
3komponisto13y
Nor, for that matter, did I: Zeus, Thor, and their innumerable counterparts should be considered included in the reference. The way to have done that, in my opinion, would have been to title the post "Simulation/creator arguments" or something similar, and to avoid any mention of theism, atheism, or religion in the body of the post.
1wedrifid13y
It was brave to even consider using a concept within a few inferential leaps from Jehovah here. :)

The only fact necessary to rationally be an atheist is that there is no evidence for a god. We don't need any arguments -- evolutionary or historical or logical -- against a hypothesis with no evidence.

The reason I don't spend a cent of my time on it is because of this, and because all arguments for a god are dishonest - that is, they are motivated by something other than truth. It's only slightly more interesting than the hypothesis that there's a teapot orbiting Venus. And there are plenty of other things to spend time on.

As a side note, I have spent time on learning about the issue, because it's one of the most damaging beliefs people have, and any decrease in it is valuable.

9Will_Newsome13y
I contend that there is evidence for a god. Observation: Things tend to have causes. Observation: Agenty things are better at causing interesting things than non-agenty things. Observation: We find ourselves in a very interesting universe. Those considerations are Bayesian evidence. The fact that many, many smart people have been theistic is Bayesian evidence. So now you have to start listing the evidence for the alternate hypothesis, no?

Do you mean all arguments on Christian internet fora, or what? There's a vast amount of theology written by people dedicated to finding truth. They might not be good at finding truth, but it is nonetheless what is motivating them. I should really write a post on the principle of charity...

I realize this is rhetoric, but still... seriously? The question of whether the universe came into being via an agenty optimization process is only slightly more interesting than teapots orbiting planets?

I agree that theism tends to be a very damaging belief in many contexts, and I think it is good that you are fighting against its more insipid/irrational forms.
6shokwave13y
I can't help but feel that this sentence pervasively redefines 'interesting things' as 'appears agent-caused'.
0DSimon13y
As curious agents ourselves, we're pre-tuned to find apparently-agent-caused things interesting. So, I don't think a redefinition necessarily took place.
2shokwave13y
This is sort of what I meant. I am leery of accidentally going in the reverse direction - so instead of "thing A is agent-caused -> pretuned to find agent-caused interesting -> thing A is interesting" we get "thing A is interesting -> pretuned to find agent-caused interesting -> thing A is agent-caused". This is then a redefinition; I have folded agent-caused into "interesting" and made it a necessary condition.
6Alex_Altair13y
I suppose that their ratio is very high, but that their difference is still extremely small. As for your evidence that there is a god, I think you're making some fundamentally baseless assumptions about how the universe should be "expected" to be. The universe is the given. We should not expect it to be disordered any more than we should expect it to be ordered. And I'd say that the uninteresting things in the universe vastly outnumber the interesting things, whereas for humans they do not. Also, I must mention the anthropic principle: a universe with humans must be sufficiently interesting to cause humans in the first place. But I do agree that many honest rational people, even without the bias of existent religion, would at least notice the analogy between the order humans create and the universe itself, and form the wild but neat hypothesis that it was created by an agent. I'm not sure if that analogy is really evidence, any more than the ability of a person to visualize anything is evidence for it.
2Jack13y
You can't just not have a prior. There is certainly no reason to assume that the universe as we have found it has the default entropy. And we actually have tools that allow us to estimate this stuff - the complexity of the universe we find ourselves in depends on a very narrow range of values in our physics. Yes, I'm making the fine-tuning argument, and of course knowing this stuff should increase our p estimate for theism. That doesn't mean P(Jehovah) is anything but minuscule - the prior for an uncreated, omnipotent, omniscient and omni-benevolent God is too low for any of this to justify confident theism.
5steven046113y
Some of it anyway.
-1Will_Newsome13y
Isn't it interesting how there's so much raw material that the interesting things can use to make more interesting things?
5[anonymous]13y
Really? Your explanation for why there's lots of stuff is that an incredibly powerful benevolent agent made it that way? What does that explanation buy you over just saying that there's lots of stuff?
4DSimon13y
Again, some of it. The vast vast majority of raw material in the universe is not used, and has never been used, for making interesting things.
-1Will_Newsome13y
Why are you ignoring the future?
4Perplexed13y
Back when I used to hang around over at talk.origins, one of the scientist/atheists there seemed to think that the sheer size of the universe was the best argument against the theist idea of a universe created for man. He thought it absurd that a dramatic production starring H. sapiens would have such a large budget for stage decoration and backdrops when it begins with such a small budget for costumes - at least in the first act. Your apparent argument is that a big universe is evidence that Someone has big plans for us. The outstanding merit of your suggestion, to my mind, is that his argument and your anti-argument, if brought into contact, will mutually annihilate leaving nothing but a puff of smoke.
0DSimon13y
Are you proposing that in the future we will necessarily end up using some large proportion of the universe's material for making interesting things? I mean, I agree that that's possible, but it hardly seems inevitable.
3timtyler13y
I think that is more-or-less the idea, yes - though you can drop the "necessarily". Don't judge the play by the first few seconds.
4DSimon13y
The reason I put in "necessarily" is because it seems like Will Newsome's anthropic argument requires that the universe was designed specifically for interesting stuff to happen. If it's not close to inevitable, why didn't the designer do a better job?
2timtyler13y
Maybe there's no designer. Will doesn't say he's 100% certain - just that he thinks interestingness is "Bayesian evidence" for a designer. I think this is a fairly common sentiment - e.g. see Hanson.
-2Will_Newsome13y
Necessarily? Er... no. But I find the arguments for a decent chance of a technological singularity to be pretty persuasive. This isn't much evidence in favor of us being primarily computed by other mind-like processes (as opposed to getting most of our reality fluid from some intuitively simpler, more physics-like computation in the universal prior specification), but it's something. Especially so if a speed prior is a more realistic approximation of optimal induction over really large hypothesis spaces than a universal prior is, which I hope is true, since I think it'd be annoying to have to get our decision theories to reason about hypercomputation...
5[anonymous]13y
Yes!
3Document13y
Possible prior work: Why and how to debate charitably, by User:pdf23ds.
4Perplexed13y
Your choice of wording here makes it obvious that you are aware of the counter-argument based on the Anthropic Principle. (Observation: uninteresting venues tend not to be populated by observers.) So, what is your real point?
1magfrump13y
I would think "Observers who find their surroundings interesting duplicate their observer-ness better" is an even-less-mind-bending anthropic-style argument. Also this keeps clear that "interesting" is more a property of observers than of places.
1TheOtherDave13y
(nods) Yeah, I would expect life forms that fail to be interested in the aspects of their surroundings that pertain to their ability to produce successful offspring to die out pretty quickly. That said, once you're talking about life forms with sufficiently general intelligences that they become interested in things not directly related to that, it starts being meaningful to talk about phenomena of more general interest. Of course, "general" does not mean "universal."
0Will_Sawin13y
If we have a prior of 100 to 1 against agent-caused universes, and .1% of non-agent universes have observers observing interestingness while 50% of agent-caused universes have it, what is the posterior probability of being in an agent-caused universe?
0Perplexed13y
I make it about 83% if you ignore the anthropic issues (by assuming that all universes have observers, or that having observers is independent of being interesting, for example). But if you want to take anthropic issues into account, you are only allowed to take the interestingness of this universe as evidence, not its observer-ladenness. So the answer would have to be "not enough data".
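For concreteness, here is that update spelled out as a quick computation (a sketch in Python; the probabilities are Will_Sawin's hypothetical numbers from above, not anyone's actual estimates):

```python
# Bayes update using Will_Sawin's made-up numbers from the comment above.
prior_agent = 1 / 101        # prior odds of 100 to 1 against an agent-caused universe
prior_non_agent = 100 / 101
p_interesting_given_agent = 0.50       # 50% of agent-caused universes
p_interesting_given_non_agent = 0.001  # 0.1% of non-agent universes

posterior_agent = (prior_agent * p_interesting_given_agent) / (
    prior_agent * p_interesting_given_agent
    + prior_non_agent * p_interesting_given_non_agent
)
print(f"P(agent-caused | interesting observers) = {posterior_agent:.3f}")  # ~0.833
```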
0Will_Sawin13y
You can't not be allowed to take the observer-ladenness of a universe as evidence. Limiting case: Property X is true of a universe if and only if it has observers. May we take the fact that observers exist in our universe as evidence that observers exist there?
0datadataeverywhere13y
I have no idea what probability should be assigned to non-agent universes having observers observing interesting things (though for agent universes, 50% seems too low), but I also think your prior is too high. I think there is some probability that there are no substantial universe simulations, and some probability that the vast majority of universes are simulations, but even if we live in a multiverse where simulated universes are commonplace, our particular universe seems like a very odd choice to simulate unless the basement universe is very similar to our own. I also assign a (very) small probability to the proposition that our universe is computationally capable of simulating universes like itself (even with extreme time dilation), so that also seems unlikely.
2Will_Sawin13y
Probabilities were for example purposes only. I made them up because they were nice to calculate with and sounded halfway reasonable. I will not defend them. If you request that I come up with my real probability estimates, I will have to think harder.
0datadataeverywhere13y
Ah, well your more general point was well-made. I don't think better numbers are really important. It's all too fuzzy for me to be at all confident about. I still retain my belief that it is implausible that we are in a universe simulation. If I am in a simulation, I expect that it is more likely that I am by myself (and that conscious or not, you are part of the simulation created in response to me), moderately more likely that there are a small group of humans being simulated with other humans and their environment dynamically generated, and overall very unlikely that the creators have bothered to simulate any part of physical reality that we aren't directly observing (including other people). Ultimately, none of these seem likely enough for me to bother considering for very long.
0jacob_cannell13y
The first part of your belief that "it is implausible that we are in a universe simulation" appears to be based on the argument: if simulationism, then solipsism is likely; solipsism is unlikely, so . . .

Chain of logic aside, simulationism does not imply solipsism. Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations. So some simulated individuals may exist in small solipsist sims, but the great majority of conscious sims will find themselves in larger shared simulations. Presumably a posthuman intelligence on earth would be interested in earth as a whole system, and would simulate this entire system.

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations. There is a massive sweet spot - an extremely efficient method - for simulating a modern computer, which is to simulate it at the level of its Turing-equivalent circuit. Simulating it at a level below this, say at the molecular level, is just a massive waste of resources, while any simulation above this loses accuracy completely. It is postulated that a similar simulation scale separation exists for human minds, which naturally relates to uploads and AI.
1datadataeverywhere13y
I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said. Cheaper, but not necessarily more efficient. It matters which answers one is looking for, or which goals one is after. It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist, is an efficient way to answer most questions either. I'd like to stay away from telling God what to be interested in, but out of the infinite space of possibilities, Earth seems too banal and languorous to be the one in N chosen for the purpose of simulation, especially if the basement universe has a different physics. If the basement universe matches our physics, I'm betting on the side that says simulating all the minds on Earth, and enough other stuff to make the simulation consistent, is an expensive enough proposition that it won't be worthwhile to do it many times. Maybe I'm wrong; there's no particular reason why simulating all of humanity in the year 2011 needs to take more than 10^18 J, so maybe there's a "real" Milky Way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.
1jacob_cannell13y
I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility. A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind. But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use much less computational resources.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped, with a single large spike near the scale separation point. The jumping point is where the pseudo-mind becomes a real, actual conscious observer of its own. The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.

Perhaps not your life in particular, but human life on earth today? Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don't expect this to ever change - simulations have utility for simulators.

I'm a little confused about the 10^18 number. From what I recall, at the limits of computa
2datadataeverywhere13y
I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Which seems pretty reasonable to me. Why should the value of simulating minds be linear rather than logarithmic in the number of minds? Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

I agree, though as a minor note, if cost is the Y-axis the graph has to have a vertical asymptote, so it has to grow much faster than exponential at the end. Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend. I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline with experiences that their conscious mind will integrate and patch up without noticing. I'm not sure this would work with many mind-types, but I think it would work with human minds, which have a strong bias toward maintaining coherence, even at the cost of ignoring reality. If I'm being simulated, I suspect that this is happening even to me on a regular basis, and possibly happening much more often the less I interact with someone.

Updating on the condition that we closely match the ancestors of our simulators, I think it's pretty reasonable that we could be chosen to be simulated. This is really the only plausible reason I can think of to choose us in particular. I'm still dubious as to the value doing so will have to our descendants.

Actually, I made a mistake, so it's reasonable to be confused. 20 W seems to be a reasonable upper limit for the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less. This gives about 10^11 W for six billion minds, or roughly 4x10^18 J for one year.
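The arithmetic behind those last figures, for the record (a sketch; the 20 W per mind is the commenter's assumed upper bound, not an established cost):

```python
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

watts_per_mind = 20   # assumed upper bound for simulating one human mind
minds = 6e9           # roughly the 2011 human population

total_power = watts_per_mind * minds               # 1.2e11 W, i.e. ~10^11 W
energy_per_year = total_power * SECONDS_PER_YEAR   # ~3.8e18 J, i.e. (4x)10^18 J
print(f"{total_power:.1e} W, {energy_per_year:.1e} J per year")
```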
1jacob_cannell13y
Simply because we are discussing simulating the historical period in which we currently exist. The premise of the SA is that the posthuman 'gods' will be interested in simulating their history. That history is not dependent on a smattering of single humans isolated in boxes, but on the history of the civilization as a whole system. If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

Imagine the flow of information in your brain. Imagine the flow of causality extending back in time, the flow of information weighted by its probabilistic utility in determining my current state. The stuff in the immediate vicinity of me is important, and the importance generally falls off according to an inverse square law with distance from my brain. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which can be causally determined by a program running on my computer, dependent on recent information in another computer in a server somewhere in the midwest a little bit ago, which was dependent on information flowing out from your brain previously . . . and so on. So simulating me would more or less require your simulation as well; it's very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.

If you want to simulate some program running on one computer on a new machine, there is an exact vertical inflection wall in the space of approximations where you get a perfect simulation, which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original. Yes, but because of the network effects mentioned earlier it would be difficult
-2datadataeverywhere13y
This is very upsetting: I don't have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I'm confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you're saying.

I thought context made it clear that I was only talking about the non-mind stuff being simulated as an additional cost perhaps nearly linear in N. Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

Why can't a poor model (low fidelity) be conscious? We just don't know enough about consciousness to answer this question. I really disagree, but I don't have time to exchange each other's posteriors, so assume this dropped. I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

I said it was a reasonable upper bound, not a reasonable lower bound. That seems trivial. Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible. That leaves us to debate how much of it can, but personally I see no reason that the computational minimum cost will be closely (even in an exponential sense) approached. I am interested in your reasoning why this should be the case, though, so please give me what you can in the way of references that led you to this belief.

Lastly, but most importantly (to me): how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well? Conditioning on (b) being true, how long ago (in subjective time) do you think our simulation started, and how many times do you believe it has been (or will be) replicated?
2jacob_cannell13y
If I were to quantify your 'very little' I'd guess you mean, say, < 1% observational overlap. Let's look at the rough storage cost first. Ignoring variable data priority through selective attention for the moment, the data resolution needs for a simulated earth can be related to photons incident on the retina and decrease with an inverse square law from the observer. We can make a 2D simplification and use Google Earth as an example. If there was just one 'real' observer, you'd need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let's say that's S, S ~ 100 km^2. Simulating an entire planet, the data cost is roughly fixed or capped - at 5x10^8 km^2. So in this model simulating an entire earth with 5 billion people will have a base cost of 5x10^8 km^2, and simulating 5 billion worlds separately will have a cost of 5x10^9 * S. So unless S is pathetically small (actually less than human visual distance), this implies a large extra cost to the solipsist approach. From my rough estimate of S, the solipsist approach is 1,000 times more expensive. This also assumes that humans are randomly distributed, which of course is unrealistic. In reality human populations are tightly clustered, which further increases the relative gain of shared simulation.

Evil? Why? I'm not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations - circuits that are very similar to the brain but built on much smaller scales on a new substrate. My main reference for the ultimate limits is Seth Lloyd's "Ultimate Physical Limits of Computation". The Singularity is Near discusses much of this as well of course (but he mainly uses the more misleading ops per second, which is much less well defined). Biological circuits switch at 10^3 to 10^4 bit flips per second. Our computers went from around that speed in W
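The storage comparison in that first paragraph, made explicit (a sketch using the comment's own rough numbers; S and the population figure are its illustrative assumptions):

```python
S = 100.0          # km^2 of full-fidelity surface one observer sees up close in a lifetime (assumed)
EARTH_AREA = 5e8   # km^2, rough surface area of the Earth
PEOPLE = 5e9       # population figure used in the comment

shared_cost = EARTH_AREA       # one shared world: cost capped at the planet's area
solipsist_cost = PEOPLE * S    # one private world per observer

print(f"solipsist / shared storage ratio: {solipsist_cost / shared_cost:,.0f}x")  # 1,000x
```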
2datadataeverywhere13y
I feel like this would make you a terrible video game designer :-P. Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.

Heh, my ability to argue is getting worse and worse. You sure you want to continue this thread? What I meant to say (and entirely failed to) is that there is an infrastructure cost; we can't expect to compute with every particle, because we need lots of particles to make sure the others stay confined, get instructions, etc. Basically, not all matter can be a bit at the same time. Again, infrastructure costs.

Can you source this (also Lloyd?)? For the rest, I'm aware of and don't dispute the speeds and densities you mention. What I'm skeptical of is that we have evidence that they are practicable; this was what I was looking for. I don't count the previous success of Moore's Law as strong evidence that we will continue getting better at computation until we hit physical limits. I'm particularly skeptical about how well we will ever do on power consumption (partially because it's such a hard problem for us now).

The idea that I did not have to live this life, that some entity or civilization has created the environment in which I've experienced so much misery, and that they will do it again and again, makes me shake with impotent rage. I cannot express how much I would rather have never existed. The fact that they would do this and so much worse (because my life is an astoundingly far cry from the worst that people deal with), again and again, to trillions upon trillions of living, feeling beings... I cannot express my sorrow. It literally brings me to tears. This is not sadism; or it would be far worse. It is rather a total neglect of care, a relegation of my values in favor of historical interest. However, I still consider th
0jacob_cannell13y
Of course you're on the right track here - and I discussed spatially variant fidelity simulation earlier. The rough surface area metric was a simplification of storage/data generation costs, which is a separate issue from computational cost. If you want the most bare-bones efficient simulation, I imagine a reverse hierarchical induction approach that generates the reality directly from the belief network of the simulated observer, a technique modeled directly on human dreaming. However, this is only most useful if the goal is just to generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you can't start with the simulated observers - they are greater unknowns than the environment itself. The solipsist issue may not have discernible consequences, but overall the computational scaling is sublinear for emulating more humans in a world, and probably significantly so because of the large causal overlap of human minds via language.

Physical Limits of Computation

The intellectual work required to show an ultimate theoretical limit is tractable, but showing that achieving said limit is impossible in practice is very difficult. I'm pretty sure we won't actually hit the physical limits exactly; it's just a question of how close. If you look at our historical progress in speed and density to date, it suggests that we will probably go most of the way.

Another simple assessment, related to the doomsday argument: I don't know how long this Moore's Law progression will carry on, but it's lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing. A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore's Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.

Power density depends on clock rate, which has plateaued
2datadataeverywhere13y
I believe we're arguing along two paths here, and it is getting muddled. Applying to both: I think one can maintain the world-per-person sim much more cheaply than you originally suggested, long before one hits the spot where the sim is no longer accurate to the world except where it intersects with the observer's attention. Second, from my perspective you're begging the question, since I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many---but you seem only to be concerned with historical recreation, in which case it seems obvious to me that a large group of minds is necessary. If we're only talking about that case, the arguments along this line about the per-mind cost just aren't very relevant.

I have a 404 on your link; I'll try later.

Interesting, I haven't heard that argument applied to Moore's Law. Question: you arrive at a train crossing (there are no other cars on the road), and just as you get there, a train begins to cross before you can. Something goes wrong, and the train stops, and backs up, and goes forward, and stops again, and keeps doing this. (This actually happened to me.) Ten minutes later, should you expect that you have around 10 minutes left? After those have passed, should your new expectation be that you have around 20 minutes left? The answer is possibly yes. I think better results would be obtained by using a Jeffreys prior. However, I've talked to a few statisticians about this problem, and no one has given me a clear answer. I don't think they're used to working with so little data.

Revise to say "and room to push against the practicable limits" and you will see where my argument lies, despite my general agreement with this statement. To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one
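The train intuition can be made precise with a Gott-style 'delta t' argument (a sketch, assuming the moment of observation is uniformly distributed over the total stoppage):

```latex
Let $T$ be the total duration of the stoppage and $t$ the time elapsed so far.
If the observation point is uniform over the stoppage, then
$r = t/T \sim \mathrm{Uniform}(0,1)$, so
\[
  P(\text{remaining} > t) \;=\; P(T > 2t) \;=\; P\!\left(r < \tfrac{1}{2}\right) \;=\; \tfrac{1}{2},
\]
i.e.\ the median remaining time always equals the time already waited -- exactly the
ever-receding expectation described above. A Jeffreys prior $p(T) \propto 1/T$
gives a similarly scale-free answer.
```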
2jacob_cannell13y
Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

If that were actually the case, then there would be no point in moving to a new technology node! Yes, leakage is a problem at the new tech nodes, but of course power per transistor cannot possibly be increasing. I think you mean power per unit of surface area has increased. Shrinking a circuit by half in each dimension makes the wires thinner, shorter and less resistant, decreasing power use per transistor just as you'd think. Leakage makes this decrease somewhat less than the shrinkage rate, but it doesn't reverse the entire trend. There are also other design trends that can compensate and overpower this to an extent, which is why we have a plethora of power-efficient circuits in the modern handheld market.

"which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density" - Do you remember when this was from, or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed. I recall seeing some slides from NVidia claiming their next GPU architecture will cut power use per transistor dramatically as well, at several times the rate of shrinkage.

Even if the goal is maximizing fun, creating some historical sims for the purpose of resurrecting the dead may serve that goal. But I really doubt that current-human-fun-maximization is an evolutionarily stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different. Knowledge is a universal feature of intelligence. Even the purely mathematical hypothetical superintelligence AIXI would end up creating tons of historical simulations - that might be hopelessly brute force, but nonetheless superintelligences with a wide variety of goal systems would find utility in various types of simulat
3Desrtopa13y
Much of the information from the past is probably irretrievably lost to us. If the information input into the simulation were not precisely the same as the actual information from that point in history, the differences would quickly propagate so that the simulation would bear little resemblance to the history. Supposing the individuals in question did have access to all the information they'd need to simulate the past, they'd have no need for the simulation, because they'd already have complete informational access to the past. It suffers similar problems to your sandboxed anthropomorphic AI proposal; provided you have all the resources necessary to actually do it, it ceases to be a good idea. There are other possible motivations, but it's not clear that there are any others that are as good or better, so we have little reason to suppose it will ever happen.
2datadataeverywhere13y
This seems to be overly restrictive, but I don't mind confining the discussion to this hypothesis. Yes, you are correct. The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this. Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge. Does this really not bother you in the slightest? ETA: still 404
0jacob_cannell13y
While the leakage issue is important and I want to read a little more about this reference, I don't think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon; the question is more one of the increase in engineering difficulty as we progress vs the increase in our effective intelligence and simulation capacity. In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent as we somewhat lower the clock speeds. And even if leakage can never be fully prevented, eventually it itself can probably be exploited for computation.

As a child I liked McDonald's, bread, plain pizza and nothing more - all other foods were poisonous. I was convinced that my parents' denial of my right to eat these wonderful foods, condemning me to terrible suffering as a result, was a sure sign of their utter lack of goodness. Imagine if I could go back and fulfill that child's wish to reduce its suffering. It would never then evolve into anything like my current self, and in fact might evolve into something that would suffer more, or at the very least wish that it could be me.

Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans. Our current existence is completely contingent on a great evolutionary epic of suffering on an astronomical scale. But suffering itself is just one little component of that vast mechanism, and forms no basis from which to judge the totality.

You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity's current time-line has an op
3datadataeverywhere13y
I disagree; I think that problems like this, unresolved, may or may not decrease the base of our exponent, but will cap its growth earlier. On this point we disagree, and I may be on the unpopular side of this disagreement. I don't see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions. I actually think it quite likely that the increase in computational power per Joule will bottom out in ten to twenty years. I wouldn't be too surprised if exponential increase lasts thirty years, but forty seems unlikely, and fifty even less likely.

I don't care. We aren't talking about destroying the future of intelligence by going back in time. We're talking about repeating history umpteen many times, creating suffering anew each time. It sounds to me like you are insisting that this suffering is worthwhile, even if the result of all of it will never be more than a data point in a historian's database.

We live in a heartbreaking world. Under the assumption that we are not in a simulation, we can recognize facts like 'suffering is decreasing over time' and realize that it is our job to work to aid this progress. Under the assumption that we are in a simulation, we know that the capacity for this progress is already fully complete, and the agents who control it simply don't care. If we are being simulated, it means that one or more entities have chosen to create unimaginable quantities of suffering for their own purposes---by your stated belief, for historical knowledge.

Your McDonald's example doesn't address this in the slightest. You were already a living, thinking being, and your parents took care of you in the right way in an attempt to make your future life better. They couldn't have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn't it be upsetting that they chose not to? Given the choice betwe
4Dreaded_Anomaly13y
A person running such a simulation could create a simulated afterlife, without suffering, where each simulated intelligence would go after dying in the simulated universe. It's like a nice version of Pascal's Wager, since there's no wagering involved. Such an afterlife wouldn't last infinitely long, but it could easily be made long enough to outweigh any suffering in the simulated universe.
2Desrtopa13y
Or you could skip the part with all the suffering. That would be a lot easier.
2Dreaded_Anomaly13y
In general, I agree. I just wanted to offer a more creative alternative for someone truly dedicated to operating such a simulation.
0Desrtopa13y
So far the only person who seems dedicated to making such a simulation is jacob cannell, and he already seems to be having enough trouble separating the idea from cached theistic assumptions.
1Alicorn13y
I don't think that's how it works.
3Dreaded_Anomaly13y
How much future happiness would you need in order to choose to endure 50 years of torture?
1nshepperd13y
That depends on whether happiness without torture is an option. The options are better/worse, not good/bad.
1jimrandomh13y
The simulated afterlife wouldn't need to outweigh the suffering in the first universe according to our value system, only according to the value system of the aliens who set up the simulation.
-2jacob_cannell13y
Technology doesn't really advance through 'revolutions'; it evolves. Some aspects of that evolution appear to be rather remarkably predictable. That aside, the current predictions do posit a slow-down around 2020 for the general lithography process, but there are plenty of labs researching alternatives. As the slow-down approaches, their funding and progress will accelerate.

But there is a much more fundamental and important point to consider, which is that circuit shrinkage is just one dimension of improvement amongst several. As that route of improvement slows down, other routes will become more profitable. For example, for AGI algorithms, current general-purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there, just from architectural optimization. This route - neuromorphic hardware and its ilk - currently receives a tiny slice of the research budget, but this will accelerate as AGI advances, and would accelerate even more if the primary route of improvement slowed.

Another route of improvement is exponentially reducing manufacturing cost. The bulk of the price of high-end processors pays for the vast amortized R&D cost of developing the manufacturing node within the timeframe that the node is economical. Refined silicon is cheap and getting cheaper; research is expensive. The per-transistor cost of new high-end circuitry on the latest nodes for a CPU or GPU is around 100 times that of bulk circuitry produced on slightly older nodes. So if Moore's law stopped today, the cost of circuitry would still decay down to the bulk cost. This is particularly relevant to neuromorphic AGI designs, as they can use a mass of cheap repetitive circuitry, just like the brain.

So we have many other factors that will kick in even as Moore's law slows. I suspect that we will hit a slow ramping wall around or by 2020, but these other factors will kick in and human-level AGI will ramp up, and t
4Desrtopa13y
If we had enough information to create an entire constructed reality of them in simulation, we'd have much more than we needed to just go ahead and intervene. Some people would argue that it shouldn't (this is an extreme of negative utilitarianism). However, since we're in no position to decide whether the universe gets to exist or not, the dispute is fairly irrelevant. If we're in a position to decide between creating a universe like ours, creating one that's much better, with more happiness and productivity and less suffering, and not creating one at all, though, I would have an extremely poor regard for the morality of someone who chose the first. If my descendants think that all my suffering was worthwhile so that they could be born instead of someone else, then you know what? Fuck them. I certainly have a higher regard for my own ancestors. If they could have been happier, and given rise to a world as good as or better than this one, then who am I to argue that they should have been unhappy so I could be born instead? If that's possible, as you point out, then why not skip the historical recreation and go straight to simulating the paradises?
4JoshuaZ13y
I'm curious how you've reached this conclusion given how little we know about what AGI algorithms would look like.
1jacob_cannell13y
The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC - once you are willing to settle on the algorithms involved. There is another significant speedup in moving to analog computation. Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally the ideal hardware ends up looking much more like the brain than current von Neumann machines - because the brain evolved to solve AI problems in an energy-efficient manner. If you know you are working in the space of probabilistic/Bayesian-like networks, exact digital computations are extremely wasteful. Using tens or hundreds of thousands of transistors to do an exact digital multiply is useful for scientific or financial calculations, but it's a pointless waste when the algorithm just needs to do a vast number of probabilistic weighted summations, for example.
3gwern13y
Cite for last paragraph about analog probability: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf
2jacob_cannell13y
Thanks. Hefty read, but this one paragraph is worth quoting. I had forgotten that term - statistical inference algorithms - need to remember that.
3gwern13y
Well, there's also another quote worth quoting, and in fact the quote that is in my Mnemosyne database and which enabled me to look that thesis up so fast...
2jacob_cannell13y
This is true in general, but this particular statement appears out of date: "Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable". That was true perhaps circa 2000, but we hit a speed/heat wall and since then everything has been going parallel. You may see something similar happen eventually with analog computing, once the market for statistical inference computation is large enough and/or we approach other constraints similar to the speed/heat wall.
1JoshuaZ13y
Ok. But this prevents you from directly improving your algorithms. And if the learning mechanisms are to be highly flexible (like, say, those of a human brain) then the underlying algorithms may need to be modified a lot even to just approximate being an intelligent entity. I do agree that given a fixed algorithm this would plausibly lead to some speed-up. A lot of things can't be put into analog. For example, what if you need to factor large numbers? And making analog and digital stuff interact is difficult. This doesn't follow. The brain evolved through a long path of natural selection. It isn't at all obvious that the brain is even highly efficient at solving AI-type problems, especially given that humans have only needed to solve much of what we consider standard problems for a very short span of evolutionary history (and note that general mammal brain architecture looks very similar to ours).
-2jacob_cannell13y
EDIT: why the downvotes?

Yes - which is part of the reason there is a big market for CPUs. Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree, but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does; no algorithmic changes would ever be needed past that point, as the self-modification is built into the algorithm.

A modern CPU is a jack-of-all-trades that is designed to do many things, most of which have little or nothing to do with the computational needs of AGI. If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

The brain has roughly 10^15 noisy synapses that can switch around 10^3 times per second and store perhaps a bit each as well (computation and memory integrated). My computer has about 10^9 exact digital transistors in its CPU & GPU that can switch around 10^9 times per second. It has around the same amount of separate memory and around 10^13 bits of much slower disk storage. These systems have similar peak throughputs of about 10^18 bits/second, but they are specialized for very different types of computational problems. The brain is very slow but massively wide; the computer is very narrow but massively fast. The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands. Our computers are specialized and extremely adept at doing the whole spectrum of computational problems brains suck at - problems that involve long complex cha
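The throughput parity claimed there, spelled out (a sketch; all figures are the comment's order-of-magnitude estimates, not measurements):

```python
# Brain: massively wide, slow switching.
synapses = 1e15        # noisy synapses (comment's estimate)
synapse_rate = 1e3     # switches per second, ~1 bit each

# Computer circa 2011: narrow, fast switching.
transistors = 1e9      # exact digital transistors in CPU + GPU
transistor_rate = 1e9  # switches per second

brain_bits_per_s = synapses * synapse_rate            # ~1e18 bits/s
computer_bits_per_s = transistors * transistor_rate   # ~1e18 bits/s
print(f"brain: {brain_bits_per_s:.0e} bits/s, computer: {computer_bits_per_s:.0e} bits/s")
```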
4JoshuaZ13y
This limits the amount of modification one can do. Moreover, the more flexible your algorithm, the less you gain from hard-wiring it. No, we don't know that the brain is "extremely adept" at these things. We just know that it is better than anything else that we know of. That's not at all the same thing. The brain's architecture is formed by a succession of modifications to much simpler entities. The successive, blind modification has been stuck with all sorts of holdovers from our early chordate ancestors and a lot from our more recent ancestors. Easy is a misleading term in this context. I certainly can't factor a forty-digit number, but for a computer that's trivial. Moreover, some operations are only difficult because we don't know an efficient algorithm. In any event, if your speedup only occurs for the narrow set of tasks which humans can do decently, such as vision, then you aren't going to get a very impressive AGI. The ability to do face recognition in a tiny fraction of the time it would take a person is not an impressive ability.
-1jacob_cannell13y
Limits it compared to what? Every circuit is equivalent to a program. The circuit of a general processor is equivalent to a program which simulates another circuit - the program which it keeps in memory. Current von Neumann processors are not the only circuits which have this simulation-flexibility. The brain has similar flexibility using very different mechanisms. Finally, even if we later find out that, lo and behold, the inference algorithm we hard-coded into our AGI circuits was actually not so great, and somebody comes along with a much better one . . . that is still not an argument for simulating the algorithm in software.

Not at all true. The class of statistical inference algorithms that includes Bayesian networks and the cortex is both extremely flexible and benefits greatly from 'hard-wiring'.

This is like saying we don't know that Usain Bolt is extremely adept at running, he's just better than anyone else that we know of. The latter sentence in each case is of course true, but it doesn't impinge on the former. But my larger point was that the brain and current computers occupy two very different regions in the space of possible circuit designs, and are rather clearly optimized for different slices of the space of computational problems.

There are some routes by which we can obviously improve on the brain at the hardware level. Electronic circuits are orders of magnitude faster, and eventually we can make them much denser and thus much more massive. However, it is much more of an open question in computer science whether we will ever be able to greatly improve on the statistical inference algorithm used in the cortex. It is quite possible that evolution had enough time to solve that problem completely - or at least reach some near-global maximum.

Yes - this is an excellent strategy for solving complex optimization problems. Yes, and on second thought - largely mistaken. To be more precise, we should speak of computational complexity and bitops. The bes
5JoshuaZ13y
If we have many generations of rapid improvement of the algorithms, this will be much easier if one doesn't need to make new hardware each time. The general trend should still occur this way. I'm also not sure that you can reach that conclusion about the cortex, given that we don't have a very good understanding of how the brain's algorithms function. That seems plausibly correct, but we don't actually know that. Given how much humans rely on vision, it isn't at all implausible that there have been subtle genetic tweaks that make our visual regions more effective in processing visual data (I don't know the literature in this area at all).

Incorrect: the best factoring algorithms are subexponential. See for example the quadratic sieve and the number field sieve, both of which have subexponential running time. This has been true since at least the early 1980s (there are other, now obsolete, algorithms that were around before then that may have had slightly subexponential running time; I don't know enough about them in detail to comment). Factoring primes is always easy: for any prime p, it has no non-trivial factorizations. You seem to be confusing factorization with primality testing. The second is much easier than the first; we've had Agrawal's algorithm, which is provably polynomial time, for about a decade. Prior to that we had a lot of efficient tests that were empirically faster than our best factorization procedures. We can determine the primality of numbers much larger than those we can factor.

Really? The general number field sieve is simple and short? Have you tried to understand it or write an implementation? Simple and short compared to what exactly?

There are some tasks where we can argue that humans are doing a good job by comparison to others in the animal kingdom. Vision is a good example of this (we have some of the best vision of any mammal). The rest are tasks which no other entities can do very well, and we don't have any good reason to think hu
4wnoise13y
To clarify, subexponential does not mean polynomial, but super-polynomial. (Interestingly, while factoring a given integer is hard, there is a way to get a random integer within [1..N] together with its factorization quickly. See Adam Kalai's paper Generating Random Factored Numbers, Easily (PDF).)
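For the curious, Kalai's algorithm is short enough to sketch (my reading of the linked paper; the helper name is mine, and it needs a primality test, here sympy's):

```python
import random
from math import prod
from sympy import isprime

def random_factored_number(N):
    """Return (r, factors): r uniform in {1..N} along with its prime factorization."""
    while True:
        # Random non-increasing sequence N >= s1 >= s2 >= ... down to 1.
        seq, s = [], N
        while s > 1:
            s = random.randint(1, s)
            seq.append(s)
        primes = [x for x in seq if isprime(x)]
        r = prod(primes)  # 1 if no primes appeared in the sequence
        # Accept with probability r/N; the rejection step makes the output uniform.
        if r <= N and random.random() < r / N:
            return r, primes

print(random_factored_number(10**6))
```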
0JoshuaZ13y
Interesting. I had not seen that paper before. That's very cute.
2Sniffnoy13y
This is mostly irrelevant, but I think complexity theorists use a weird definition of exponential according to which GNFS might still be considered exponential - I know when they say "at most exponential" they mean O(e^(n^k)) rather than O(e^n), so it seems plausible that by "at least exponential" they might mean Omega(e^(n^k)) where now k can be less than 1. EDIT: Nope, I'm wrong about this. That seems kind of inconsistent.
2wnoise13y
They like keeping things invariant under polynomial transformations of the input, since that has been observed to be a somewhat "natural" class. This is one of the areas where it seems to not quite fit.
0JoshuaZ13y
Hmm, interesting: in the notation that Scott says is standard in complexity theory, my earlier statement that factoring is "subexponential" is wrong, even though it is slower-growing than exponential. But apparently Greg Kuperberg is perfectly happy labeling something like 2^(n^(1/2)) as subexponential.
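For reference, the factoring literature usually settles this with L-notation, which interpolates between the two regimes (the standard definition, sketched here):

```latex
L_n[\alpha, c] \;=\; \exp\!\big( (c + o(1)) \, (\ln n)^{\alpha} \, (\ln \ln n)^{1-\alpha} \big),
\qquad 0 \le \alpha \le 1 .
```

Here alpha = 0 is polynomial in ln n, alpha = 1 is fully exponential, and the general number field sieve's L_n[1/3, (64/9)^(1/3)] sits strictly in between - the sense in which factoring is called subexponential.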
0jacob_cannell13y
Yes, and this tradeoff exists today with some rough mix between general processors and more specialized ASICs. I think this will hold true for a while, but it is important to point out a few subpoints:

1. If Moore's law slows down, this will shift the balance farther towards specialized processors.

2. Even most 'general' processors today are actually a mix of CISC and vector processing, with more and more performance coming from the less-general vector portion of the chip.

3. For most complex real-world problems, algorithms eventually tend to have much less room for improvement than hardware - even if algorithmic improvements initially dominate. After a while algorithmic improvements end within the best complexity class, and then further improvements are just constants and are swamped by hardware improvement.

Modern GPUs for example have 16 or more vector processors for every general logic processor. The brain is like a very slow processor with massively wide dedicated statistical inference circuitry. As a result of all this (and the point at the end of my last post) I expect that future AGIs will be built out of a heterogeneous mix of processors, but with the bulk being something like a wide-vector processor with a lot of very specialized statistical inference circuitry. This type of design will still have huge flexibility by having programmability at the network architecture level - it could for example simulate humanish and various types of mammalian brains, as well as a whole range of radically different mind architectures, all built out of the same building blocks.

We have pretty good maps of the low-level circuitry in the cortex at this point, and it's clearly built out of a highly repetitive base circuit pattern, similar to how everything is built out of cells at a lower level. I don't have a single good introductory link, but it's called the laminar cortical pattern. Yes, there are slight variations, but slight is the keyword. The cortex is highly genera
1JoshuaZ13y
How do you reconcile this claim with the fact that some people are faceblind from an early age and never develop the ability to recognize faces? This would suggest that there's at least one aspect of humans that is normally somewhat hard-wired.
6jacob_cannell13y
I've read a great deal about the cortex, and my immediate reaction to your statement was "no, that's just not how it works" (strong priors). About one minute later, on the Prosopagnosia Wikipedia article, I found the first reference to this idea (that of congenital prosopagnosia). The idea of congenital prosopagnosia appears to be a new theory supported by one researcher and one(?) study. The last part about it being "commonly accompanied by other forms of visual agnosia" gives it away - this is not anything close to what you originally thought/claimed, even if this new research is actually correct. Known cases of true prosopagnosia are caused by brain damage - what this research is describing is probably a disorder of the higher region (V4, I believe) which typically learns to recognize faces and other complex objects.

However, there is an easy way to cause prosopagnosia during development - prevent the creature from ever seeing faces. I don't have the link on hand, but there have been experiments in cats where you mess with their vision - by using grating patterns or carefully controlled visual environments - and you can create cats that literally can't even see vertical lines. So even the simplest, most basic thing which nature could hard-code - a vertical line feature detector - actually develops from the same extremely flexible general cortical circuit: the same circuit which can learn to represent everything from sounds to quantum mechanics.

Humans can represent a massive number of faces, and in general the brain's vast information storage capacity over the genome (10^15 ish vs 10^9 ish) more or less requires a generalized learning circuit. The cortical circuits do basically nothing but fire randomly when you are born - you really are a blank slate in that respect (although obviously the rest of the brain has plenty of genetically fixed functionality). Of course the arrangement of the brain's regions with respect to sensory organs and its overall wiring architecture...
3wedrifid13y
There are all sorts of aspects of humans that are normally somewhat - or nearly entirely - hard-wired. The cortex just doesn't tend to be. Even the parts of the cortex that are similarly specialised in most humans seem to be so due to what they are connected to. (As can be seen by looking at how the atypical cases have adapted differently.) It would surprise me if the inability to recognise faces was caused by a dysfunction in the cortex specifically. Disclaimer: I disagree with nearly everything else Jacob has said in this thread. This position specifically appears to be well researched.
4JoshuaZ13y
This is unlikely. We haven't been selected based on sheer brain power or brain efficiency. Humans have been selected by their ability to reproduce in a complicated environment. Efficient intelligence helps, but there's selection for a lot of other things, such as good immune systems and decent muscle systems. A lot of the selection that was brain selection was probably simply around the fantastically complicated set of tasks involved in navigating human societies. Note that human brain size on average has decreased over the last 50,000 years. Humans are subject to a lot of different selection pressures. (Tangent: This is related to how, at a very vague level, we should expect genetic algorithms to outperform evolution at optimizing tasks. Genetic algorithms can select for narrow task completion goals, rather than select in a constantly changing environment with competition and interaction between the various entities being bred.)
1jacob_cannell13y
I largely agree with your point about human evolution, but my point was about the laminar cortical circuit which is shared in various forms across the entire mammalian lineage and has an analog in birds. It's a building block pattern that appears to have a long evolutionary history. Yes, but there is a limit to this of course. We are, after all, talking about general intelligence.
1Desrtopa13y
It seems you're arguing that our successors will develop a preference for simulating universes like ours over paradises. If that's what you're arguing, then what reason do we have to believe that this is probable? If their preferences do not change significantly from ours, it seems highly unlikely that they will create simulations identical to our current existence. And out of the vast space of possible ways their preferences could change, selecting that direction in the absence of evidence is a serious case of privileging the hypothesis.
0Desrtopa13y
To uploads, yes, but a faithful simulation of the universe, or even a small portion of it, would have to track a lot more variables than the processes of the human minds within it.
-2jacob_cannell13y
Optimal approximate simulation algorithms are all linear with respect to total observer sensory input. This relates to the philosophical issue of observer dependence in QM and whether or not the proverbial unobserved falling tree actually exists. So the cost of simulating a matrix with N observers is not expected to be dramatically more than simulating the N observer minds alone - C*N. The phenomenon of dreams is something of a practical proof.
0Desrtopa13y
Variables that aren't being observed still have to be tracked, since they affect the things that are being observed. Dreams are not a very good proof of concept given that they are not coherent simulations of any sort of reality, and can be recognized as artificial not only after the fact, but during with a bit of introspection and training. In dreams, large amounts of data can be omitted or spontaneously introduced without the dreamer noticing anything is wrong unless they're lucid. In reality, everything we observe can be examined for signs of its interactions with things that we haven't observed, and that data adds up to pictures that are coherent and consistent with each other.
2prase13y
Depends on personal standards of interest. I may be more interested in questions which I can imagine answering than ones whose answer is a matter of speculation, even if the first class refers to small unimportant objects while the second speaks about the whole universe. Practically, finding teapots orbiting Venus would have more tangible consequences than realising that "the universe was caused by an agenty process" is true (when further properties of the agent remain unspecified). The feeling of grandness associated with learning the truth about the very beginning of the universe, when the truth is so vague that all anticipated expectations remain the same as before, doesn't count in my eyes. Even if you forget heaven, hell, souls, miracles, prayer, religious morality and a plethora of other things normally associated with theism (which I don't approve of, because confusion inevitably appears when words are redefined), and leave only "the universe was created by an agenty process" (accepting that "universe" has some narrower meaning than "everything which exists"), you have to point out how we can, at least theoretically, test it. Else it may not be closed for being definitely false, but it would still be closed for being uninteresting.
2Dreaded_Anomaly13y
"Interesting" is subjective, and further, I think you overestimate how many interesting things we actually know to be caused by "agenty things." Phenomena with non-agenty origins include: any evolved trait or life form (as far as we have seen), any stellar/astronomical/geological body/formation/event...
3mkehrt13y
It is pretty likely you are correct, but this is probably the best example of question-begging I have ever seen.
1gjm13y
All Dreaded_Anomaly needs for the argument I take him or her to be making is that those things are not known to be caused by "agenty things". More precisely: Will Newsome is arguing "interesting things tend to be caused by agents", which is a claim he isn't entitled to make before presenting some (other) evidence that (e.g.) trees and clouds and planets and elephants and waterfalls and galaxies are caused by agents.
0Dreaded_Anomaly13y
It seems to me that basing such a list on evidence-based likelihood is different than basing it on mere assumption, as begging the question would entail. I do see how it fits the definition from a purely logical standpoint, though.
0Will_Newsome13y
Interestingness is objective enough to argue about. (Interestingly enough, that is the very paper that eventually led me to apply for Visiting Fellowship at SIAI.) I think that the phenomena you listed are not nearly as interesting as macroeconomics, nuclear bombs, genetically engineered corn, supercomputers, or the singularity. Edit: I misunderstood the point of your argument. Going back to responding to your actual argument... I still contend that we live in a very improbably interesting time, i.e. on the verge of a technological singularity. Nonetheless this is contentious and I haven't done the back of the envelope probability calculations yet. I will try to unpack my intuitions via arithmetic after I have slept. Unfortunately we run into anthropic reference class problems and reality fluid ambiguities where it'll be hard to justify my intuitions. That happens a lot.
8topynate13y
All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren't any more evidence for the existence of an intelligent cause of the universe than the existence of humans: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.
-1Will_Newsome13y
Right, I was responding to Dreaded_Anomaly's argument that interesting things tend not to be caused by agenty things, which was intended as a counterargument to my observation that interesting things tend to be caused by agenty things. The exchange was unrelated to the argument about the relatively (ab)normal interestingness of this universe. I think that is probably the reason for the downvotes on my comment, since without that misinterpretation it seems overwhelmingly correct. Edit: Actually, I misinterpreted the point of Dreaded_Anomaly's argument, see above.
5Dreaded_Anomaly13y
I'm not sure how an especially interesting time (improbable or otherwise) occurring ~13.7 billion years after the universe began implies the existence of God.
2DSimon13y
Ack! Watch out for that most classic of statistical mistakes: seeing something interesting happen, going back and calculating the probability of that specific thing (rather than interesting things in general!) having happened, seeing that that probability is small, and going "Ahah, this is hardly likely to have happened by chance, therefore there's probably something else involved."
0datadataeverywhere13y
In this case, I think Fun Theory specifies that there are an enormous number of really interesting things, each of minuscule individual probability, but highly likely as an aggregate.
-1Will_Newsome13y
Of course. Good warning though.
7Jack13y
The existence of the universe is actually very strong evidence in favor of theism. It just isn't nearly strong enough to overcome the insanely low prior that is appropriate.
-1jacob_cannell13y
Evidence allows one to dissociate theories and rule out those incompatible with observational history. The best current fit theory to our current observational history is the evolution of the universe from the Big Bang to now according to physics. If you take that theory it also rather clearly shows a geometric acceleration of local complexity and predicts (vaguely) Singularity-type events as the normal endpoints of technological civilizations. Thus the theory also necessarily predicts not one universe, but an entire set of universes embedded in a hierarchy starting with a physical parent universe. Our current observational history is compatible with being in any of these pocket universes, and thus we are unlikely to be so lucky as to be in the one original parent universe. Thus our universe in all likelihood was literally created by a super-intelligence in a parent universe. We don't need any new evidence to support this conclusion, as it's merely an observation derived from our current best theory.

But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?

To a non-scientifically-literate person, I might say that I think electrons exist as material objects, whereas to a physicist I would invoke Tegmark's idea that all that exist are mathematical structures.

One way to make sense of this is to think...

2Document13y
The listener in this case being a theist you're trying to explain your epistemic position to, I assume. (It took me a moment to figure out the context.) Possibly related: "(Hugh) Everett's daughter, Elizabeth, suffered from manic depression and committed suicide in 1996 (saying in her suicide note that she was going to a parallel universe to be with her father)" (via rwallace).
5shokwave13y
My gut feeling is the causal flow goes "manic depression -> suicide, alternate universes" rather than "alternate universes -> manic depression -> suicide".
5Vaniver13y
Honestly, I wouldn't be that sure. On this very site I've seen people say their reason for signing up for cryonics was their belief in MWI. It would not surprise me if "suicide -> hell" decreases the overall number of suicides and "suicide -> anthropic principle leaves you in other universes" increases the overall number of suicides.
1ata13y
Really? What's the reasoning there (if you remember)?
1Vaniver13y
The post is here; the reasoning is as written there. My comments on the subject (having cut out the tree debating MWI) can be found here.
-1Will_Newsome13y
I meant that a lot of arguments about what kinds of objectives a creator god might have, for example, would be very tricky to do right, with lots of appeals to difficult-to-explain Occamian intuitions. Maybe this is me engaging in typical mind fallacy though, and others would not have this problem. People going crazy is a whole other problem. Currently people don't think very hard about cosmology or decision theory or what not. I think this might be a good thing, considering how crazy the Roko thing was.
2Wei Dai13y
I see. I think at this point we should be trying to figure out how to answer such questions in principle with the view of eventually handing off the task of actually answering them to an FAI, or just our future selves augmented with much stronger theoretical understanding of what constitute correct answers to these questions. Arguing over the answers now, with our very limited understanding of the principles involved, based on our "Occamian intuitions", does not seem like a good use of time. Do you agree?
-1Will_Newsome13y
It seems that people build intuitions about how general super-high-level philosophy is supposed to be done by examining their minds as their minds examine specific super-high-level philosophical problems. I guess the difference is that in one case you have an explicit goal of being very reflective on the processes by which you're doing philosophical reasoning, whereas the sort of thing I'm talking about in my post doesn't imply a goal of understanding how we're trying to understand cosmology (for example). So yes, I agree that arguing over the answers is probably a waste of time, but arguing over which ways of approaching answers are justified seems to be very fruitful. (I'm not really saying anything new here, I know -- most of Less Wrong is about applying cognitive science to philosophy.)

As a side note, it seems intuitively obvious that Friendliness philosophers and decision theorists should try to do what Tenenbaum and co. do when trying to figure out what Bayesian algorithms their brains might be approximating in various domains, sometimes via reflecting on those algorithms in action. Training this skill on toy problems (like the work computational cognitive scientists have already done) in order to get a feel for how to do similar reflection on more complicated algorithms/intuitions (like why this or that way of slicing up decision theoretic policies into probabilities and utilities seems natural, for instance) seems like a potentially promising way to train our philosophical power.

I think we agree that debating e.g. what sorts of game theoretic interactions between AIs would likely result in them computing worlds like ours is probably a fool's endeavor insofar as we hope to get precise/accurate answers in themselves and not better intuitions about how to get an AI to do similar reasoning.

I'm technically some kind of theist, because I believe this world is likely to be a simulation (although I don't believe it in my gut). I tell people I'm an atheist because telling them the more-accurate truth, that I am a theist, conveys negative information because of how they inevitably interpret it.

It's a reasonable thing to point out: Why do LWers criticize theism so heavily when they may be theists?

There's a confusion caused because our usage of the term doesn't distinguish between "theist re. this universe I'm in" and "theist for the root universe"...

0lukstafi13y
Would you like to address your point of view on what the impact is in both cases, or link to relevant discussion? Is it "be on the lookout for miracles"? Why wouldn't we just do our business as usual being in a simulation as opposed to being in a "root universe"?
1PhilGoetz11y
I don't mean that it has to do with which universe we are in. A lot of people believe, for reasons which have never been clear to me, that if a God created the universe, then that God's opinions have special moral status. I was presuming that that God does not have special moral status if it had been created by another God, or through evolution. But I don't know what Christians would say. Possibly they would refuse to consider the scenario.
2lukstafi11y
If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require of the creator to know much.
0shminux11y
They should refuse. Asking wrong questions has been a temptation by the Devil since the times of the original sin. A good Christian should know when to stop.
-1Lumifer11y
Think about it from a slightly different perspective: the claim is that the universe has morality baked into it -- God created such a universe that moral laws are the same as laws of physics. In other words, the claim is that morality is objective and is embedded in reality. It's not an "opinion" at all. In Christianity (or Judaism, or Islam) God cannot have been created (by somebody else or through evolution). In theology that's one of the biggest differences between God and the world -- one is uncreated and one is created.

Tegmark cosmology implies not only that there is a universe which runs this one as a simulation, but that there are infinitely many such universes and infinitely many such simulations. In some fraction of those universes, the simulation will have been designed by an intelligent entity. In some smaller fraction, that entity has the ability to mess with the contents of the simulation (our universe) or copy data out of it (eg, upload minds and give them afterlives). My theism is equal to my estimate of this latter fraction, which is very small.
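One way to make the structure of that estimate explicit (my own decomposition, with hypothetical symbols; not anything jimrandomh wrote):

```latex
P(\text{theism}) \;\approx\; f_{\text{designed}\,\mid\,\text{sim}} \;\times\; f_{\text{can intervene}\,\mid\,\text{designed}},
```

where each factor is the fraction described above: the claim is that this product, not any single factor, is what "theism" should track, and that the second factor in particular is small.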

6Perplexed13y
I'm not sure that this is true. My understanding is that IF a universe which runs this one as a simulation is possible, THEN Tegmark cosmology implies that such a universe exists. But I'm not sure that such a universe is possible. After all, a universe which contains a perfect simulation of this one would need to be larger (in duration and/or size) than this one. But there is a largest possible finite simple group, so why not a largest possible universe? I am not confident enough of my understanding of the constraints applicable to universes to be confident that we are not already in the biggest one possible. There is a spooky similarity between the Tegmark-inspired argument that we may live in a simulation and the Godel/St. Anselm-inspired argument that we were created by a Deity. Both draw their plausibility by jumping from the assertion that something (rather poorly characterized) is conceivable to the claim that that thing is possible. That strikes me as too big of a jump.
9magfrump13y
There isn't a largest finite simple group. There's a largest sporadic finite simple group. Z/pZ is finite and simple for all primes p, and if you think there is a largest prime I have some bad news...
0Perplexed13y
Doooohhh! Thx.
2jimrandomh13y
You're right, that is an additional requirement. Nevertheless, it seems very highly likely to me that such a universe is possible; for it to be otherwise would imply something very strange about the laws of physics. The most-existent universe simulating ours might exist to a degree 1/BB(100) times as much as our universe exists, though; in that case, they would "exist", but not for any practical purposes. This seems more likely than our universe having some property we don't know about that makes it impossible to simulate.
3JoshuaZ13y
If one accepts general Tegmark, is there any natural measure for describing how common different universes should be in any meaningful sense?
2jimrandomh13y
Yes, but unfortunately, there are many measures to choose from, and you can't possibly tell which is correct until you've visited Permutation City and at least a dozen of its suburbs.
2Perplexed13y
I agree with the question. It may make sense to attach "probabilities of existing" to universes arising in a chaotic inflation model, but not, I think, in an "ultimate ensemble" multiverse, which seems to be the one being examined here. But, to be honest, I had never even considered the possibility that a particularly large bubble universe might contain a simulation of a much smaller bubble. Inflation, as I understand it, does make it possible for a simulation of one small piece of physical reality to encompass an entire isolated 'universe'.
2ata13y
Not yet, as far as I know. Big World cosmology seems to be going in the right direction, but it's not yet understood well enough that we should be coming to any epistemological or ethical conclusions based on it.
2Will_Newsome13y
Clarifying: I'm guessing that by 'ability' you mean 'ability and inclination'?
2jimrandomh13y
Right. Actually, forget about both of those; all that matters is whether it actually does modify the simulation's contents or copy out data that includes a mind at least once. And, come to think of it, the intervention would also have to be inside our past or future light cone, which might lower the fraction pretty substantially (it means any outer universe which instantiates our entire infinite universe, but makes only finitely many interventions, doesn't count). Although - there are some interpretations of consciousness under which, upon death, the fraction of enclosing universes which copy out minds doesn't matter, only the proportions of them with different qualities. In that case, the universe would act as though there were no gods or outer universes until you died or performed enough iterations of quantum suicide, after which you'd end up in a different universe. I'm not sure how much credence I give to those interpretations.
1Oligopsony13y
What does "fraction" mean here?
0Leonhart13y
It seems to me that, if we insist on using simulation hypotheses as a model for theism, this has to be narrowed still further. Theism adds the constraint that though $deity is simulating us, no-one is simulating $deity; He's really really real and the buck stops with Him. We live in the floor just above reality's basement; isn't that nice. I think that this might be what Eliezer's quote about "ontological distinctness" refers to, but I'm not sure.
1jimrandomh13y
Monotheism requires that, but theism doesn't. And unless there are some universes that are for some reason impossible to simulate, Tegmark cosmology implies that there are no universes for which there are no universes simulating them. Is-God-of is a two-place predicate.
0TheOtherDave13y
If one were interested in salvaging the correspondence, one could argue that there's a chain of simulators-simulating-simulators and it's that chain (which extends down to "reality's basement") that theists label as a deity. That said, I see no point in allowing ontology to get out ahead of epistemology in this area. Sure, maybe all this stuff is going on. Maybe it isn't. Unless these conjectures actually cash out somehow in terms of different expectations about observable phenomena, there seems little point to talking about them.
0Document13y
Nitpick: Will isn't the only self-identified theist you'd have to convince of that.

Theists are wrong; is theism?

I think this is an interesting question! If rationalists speculated about the origin of the universe, what would they come up with? What if 15 rationalists made up a think-tank and were charged to speculate about the origin of the universe and assign probabilities to speculations? It would be a grievous mistake to begin with the hypothesis of theism, but could they end up with it on their list, with some non-negligible probability?

I don't think so. The main premise of the theistic religions is that an entity (a person? a mind?)...

3byrnema13y
I'll develop my thoughts about not being able to sensibly apply the description 'agenty' to the creator, because wondering why agency should be a key question is what originally motivated my above comment. You can search 'agenty' and find many comments on this page that discuss whether we should speculate that the creator has agency. I found myself wondering throughout these comments what is specifically meant by this. If the creator is 'agenty', what properties must it have, and are those properties necessarily interesting?

I could probably look around and find a definition I would like better, but my definition of 'agenty', when I first start thinking about it, is that this has meaning in a specifically human context. Broadly, something 'agenty' is something that makes decisions according to a complex decision tree algorithm. This is a human-context-specific definition because "complex" means relative to what we consider complex. A mammal makes complex decisions and thus is 'agenty', while a simple process like water makes simple decisions (described by a small number of equations and the properties of the immediate physical space) and is not agenty. A complex inanimate thing (like 'evolution') and a simple animate thing (like a virus) would give us pause, straining our immediate, concrete conception of agency.

I'm willing to say that evolution has agency (it has goals -- long term stable solutions -- and complicated ways of achieving these goals) and water has simple agency. This is because, in my opinion, what was really meant when we made the agency dichotomy between humans and water is that humans have free will and water doesn't. But finally, with a deterministic world view, this distinction dissolves. Humans have as much agency as anything else, but our decision algorithm is very complex to us, whereas we can often reliably predict what water will do.

Then to apply this concept of agency to the mechanism of creation of the universe... All the rules and stead...

Part of the problem here is that there's no clear meaning of the word ‘god’ (taking for granted that ‘theism’ and ‘atheism’ are defined in terms of it). I usually identify as ‘secular humanist’ rather than ‘atheist’, mostly because it's more precise, but also because I have seen people define ‘god’ in such a way that I believe that one might well exist. These have all been very vague definitions (more along pantheistic than monotheistic lines), but they're not gratuitous (like defining ‘god’ to mean, say, my nose), and by these lights I'm merely a (weak)...

4[anonymous]13y

Agreed. I think this is a cultural thing rather than a truly rational thing. I was brought up as an atheist, and would still describe myself as such, but I wouldn't give a zero probability to the simulation argument, or to Tipler's Omega Point, or whatever (I wouldn't give a high probability to either - and Tipler's work post-1994 has been obvious ravings), and I can imagine other scenarios in which something we might call God might exist. I don't see myself changing my mind on the theism question, but I don't consider it a closed one.

When I abandoned religion, a friend of mine did the same at about the same time. We spoke recently and it turned out that he self-labeled as agnostic, me - "atheist". We discussed this a bit and I said something to the extent that "I do not see a shred of justice in the world that would indicate the working of a personal god; if there is something like a god that runs the universe amorally, we may as well call it physics and get on with it".

It seems that you want to draw the additional distinction of "agenty" things vs. dumb gears, but as long as they only "care" about persons as atoms, vs. moral agents, who cares? It admittedly tickles curiosity, but will hardly change the program...

4Jack13y
What makes you think an agenty, simulator-type god wouldn't care about persons as moral agents?
5Psy-Kosh13y
An agenty simulator type god that actually did care about persons as moral agents would have created a very different universe than this one (assuming they were competent).
5Jack13y
Well if it were chiefly concerned with us having a lot of fun, or not experiencing pain or fulfilling more of our preferences then yes. But maybe the simulator is trying to evolve companions. Or maybe it is chiefly concerned with answering counter-factual questions and so we have to suffer for it to get the right answers... but that doesn't mean the simulator doesn't care about us at all. Maybe it saves us when we die and are no longer needed for the simulation. Or maybe the simulator just has weird values and this is their version of a eutopia.
2Will_Newsome12y
"Companions, the creator seeketh, and not corpses--and not herds or believers either. Fellow-creators the creator seeketh--those who grave new values on new tables."
1jacob_cannell13y
I find that the SA leads us to believe just the opposite. Future posthumans will be descended in one form or another from people alive today. Some of them may be uploads of people who actually were alive today, some may be newly raised biological humans and uploads, or even minds just loosely based on human minds through reading and absorbing our culture. If these future posthumans share much of the same range of values that we have, many of them will be interested in the concept of resurrecting the dead - recreating likely simulations of deceased, lost humans from their history - whether personal or general.
0Desrtopa13y
There was already a thread on this. The general consensus seems to be that it isn't practical, if possible.
0jacob_cannell13y
Hmm, from my reading of the thread it doesn't look like much of a consensus. I may want to revive this - the arguments against practicality don't seem convincing from an engineering perspective. From a high quality upload's scanned mind one should get a great deal of information about the upload's closest friends, relatives, etc. The data from any one such upload may not be overwhelming, but you'd start with a large population of such uploads. People who were well known and loved would be easier cases, but you could also supplement the data in many cases with low-quality scans from poorly preserved bodies. This should give one prior generation. Going back another previous generation would get murkier, but is still quite possible, especially with all the accessory historical records. The farther back you go, the less 'accurate' the uploads become, but the less and less important this 'accuracy' becomes.

For example, assuming I become a posthuman, I will be interested in bringing back my grandfather. There is a huge space of possible minds that could match my limited knowledge and beliefs about this person I never met. Each of them would fully be my grandfather from my subjective perspective and would fully be my grandfather from their subjective perspective. There is no objective standard frame of reference from which to evaluate absolute claims of personal identity. It is relative.
2Desrtopa13y
But if you simulate anything other than the actual brain states of the people in question, then they won't behave in exactly the same way. No matter how many other people's knowledge of me you integrate, for example, you won't have the data to predict what I'll eat for breakfast tomorrow with any accuracy (because I almost invariably eat breakfast alone.) Tiny differences like this will quickly propagate to create much larger ones between the simulation and the reality. Jump forward a few generations and you have zero population overlap between the new generation of the simulation and the next generation that was born in reality. If you're attempting historical recreation, this would be a pretty useless way to go about it. If you wanted to create a simulation that was an approximation of a particular historical period at one point, but quickly divorced from it as it ran forward, that would be much more plausible, but why would you want to? Everything I can think of that could be accomplished in such a way could more easily be accomplished by doing something else.
0jacob_cannell13y
Sure, but that's not relevant to the goal. There are no 'actual' or exact brain states that canonically define people. If you created a simulation of an alternate 1950 and ran it forward, it would almost certainly diverge, but this is no different from alternate branches of the multiverse. Running the alternate forward to, say, 2050 may generate a very different reality, but that may not matter much - as long as it also generates a bunch of variants of people we like. This brings to mind a book by Heinlein about a man who starts jumping around between branches - "Job: A Comedy of Justice".

Anyway, my knowledge of my grandfather is vague. But I imagine posthumans could probably nail down his DNA and eventually recreate a very plausible 1890 (around when he was born). We could also nail down a huge set of converging probability estimates from the historical record to figure out where he was when, what he was likely to have read, and so on. Creating an initial population of minds is probably much trickier. Is there any way to create a fully trained neural net other than by actually training it? I suspect that it's impossible in principle. It's certainly the case in practice today. In fact, there may be no simple shortcut without going way back into earlier prehistory, but this is not a fundamental obstacle, as this simulation could presumably be a large public project.

Yes, the approach of just creating some initial branch from scratch and then running it forward is extremely naive. If you'd like, I could think of ten vastly more sophisticated algorithms that could shape the branch's forward evolution to converge with the main future worldline before breakfast. The first thing that pops to mind: the historical data that we have forms a very sparse sampling, but we could use it to guide the system's forward simulation, with the historical data acting as constraints and attractors. In these worlds, fate would be quite real. I think this gives you the general idea.
0Desrtopa13y
We can get to that if you can establish that there's any good reason to do it in the first place. Your justifications for running such simulations have so far seemed to hinge on things we could learn from them (or simply creating them for their own sake; it appears that you're jumping between the two), but if we know enough about the past to meaningfully create the simulations, then there's not much we stand to learn from making them. Yes, history could have branched in different ways depending on different events that could have occurred; we already know that. If you try to calculate all the possibilities as they branch off, you'll quickly run out of computing power no matter how advanced your civilization is. If you want to do calculations of the most likely outcomes of a certain event, you don't have to create a simulation so advanced that it appears to be a real universe from the inside to do that.
0jacob_cannell13y
Excellent! The two are intertwined - we can learn a great deal from our history and ancestors while simultaneously valuing it for reasons other than the learning. Thinking is just a particular form of approximate simulation. Simulation is a very precise form of thinking. Right now all we know about our history is the result of taking a small collection of books and artifacts and then thinking a lot about them. Why do we write books about Roman history and debate what really happened? Why do we make television shows or movies out of it? Consider this just the evolution of what we already do today, for much of the same reasons, but amplified by astronomical powers of increased intelligence/computation generating thought/simulation.

This is what we call a naive algorithm, the kind you don't publish. Calculations of the likely outcomes of certain events are the mental equivalents of thermostat operations - they are the types of things you do and think about when you lack hyperintelligence. Eventually you want a nice canonical history. Not a book, not a movie, but the complete data set and recreation. As it is computed it exists; eventually perhaps you merge it back into the main worldline, perhaps not, and once done and completed you achieve closure. Put another way, there is a limit where you can know absolutely every conceivable thing there is to know about your history, and this necessitates lots of massively super-detailed thinking about it - aka simulation.
6Desrtopa13y
This is the kind of naive forward extrapolation that gets you sci fi dystopias. Most of the things we do today don't bear extrapolating to logical extremes, certainly not this. No I don't. I think you should try asking more people if this is actually something they would want, with knowledge of the things they could be doing instead, rather than assuming it's a logical extrapolation of things that they do want. If I could do that, it wouldn't even make the bottom of the list of things I'd want to do with that power. The simulation doesn't teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it's different in every way that we do not enforce similarity on it. The simulation doesn't contribute to knowing everything you could possibly know about your history; that's a prerequisite, if you want the simulation to be faithful.
0Jack13y
This would be true if we were equally ignorant about all of history. However, there are some facts regarding history we can be quite confident about - particularly recent history and the present. You can then check possible hypotheses about history (starting from what is hopefully an excellent estimation of starting conditions) against those facts you do have. Given how contingent the genetic make-up of a human is on the timing of their conception, and how strongly genetics influences who we are, it seems plausible a physical simulation of this part of the universe could radically narrow the space of possibilities given enough computing power. Of course parts of the simulation might remain under-determined, but it seems implausible that a simulation would tell us nothing new about history, as a simulation should be more proficient than humans at assessing the necessary consequences and antecedents of any known event.
2Desrtopa13y
Radically narrow, but given just how vast the option space is, it takes a whole lot more than radically narrowing before you can winnow it down to a manageable set of possibilities. This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms. In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space (that's not to say that even an appreciable fraction of matter on Earth is free to vary through all possible states, but the numbers are mind boggling enough even if we're only dealing with a few kilograms.) Every unknown configuration is a potential confounding factor which could lead to cascading changes. The space is so phenomenally vast that you could narrow it by a billion orders of magnitude, and it would still occupy approximately the same space on the scale of sheer incomprehensibility. You would have to actively and continuously enforce similarity on the simulation to keep it from diverging more and more widely from the original.
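For a rough sense of the scale (my own back-of-the-envelope via the Bekenstein bound, not the linked post's exact numbers): for roughly 1.5 kg of matter in a 10 cm sphere,

```latex
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}
\;=\; \frac{2\pi \cdot (0.1\,\mathrm{m}) \cdot (1.5\,\mathrm{kg})\, c^2}{\hbar c \ln 2}
\;\approx\; 4 \times 10^{42}\ \text{bits},
```

so the configuration count is of order 2^(10^42). Narrowing that by "a billion orders of magnitude" barely dents the exponent.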
0jacob_cannell13y
Said reference post by AndrewHickey starts with a ridiculous assumption: voodoo-quantum consciousness, the idea that your mind-identity somehow depends on details down to the quantum state. This can't possibly be true, because the vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. There thus is no single quantum state that corresponds uniquely to a mind; rather there is a vast configuration space. You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?

There is a single minimal representation of a computer - it reduces exactly down to its circuit diagram and the current values it holds in its memory/storage. If you don't buy into the idea that a human mind ultimately reduces down to some functionally equivalent computer program, then of course the entire Simulation Argument won't follow.

Who cares? There could be infinite detail in the universe - we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn't matter in the slightest. You only need as much detail in the simulation as you want detail in the simulation. Some details at certain spatial scales are more important than others based on their causal leverage - such as the bit values in computers, or synaptic weights in brains. A simulation at the human-level scale would only need enough detail to simulate conscious humans, which will probably include simulating down to rough approximations of synaptic-net equivalents. I doubt you would even simulate every cell in the body, for example - unless that itself was what you were really interested in.

There is another significant mistake in typical feasibility critiques of simulationism: assuming your current knowledge...
0Desrtopa13y
That assumption is not part of my argument. The states of objects outside the people you're simulating ultimately affect everything else once the changes propagate far enough through the simulation. Underestimating the importance of glial cells could get you a pretty bad model of the brain. But my point isn't simply about the thoughts you'd have to simulate; remove one glial cell from a person's brain, and the gravitational effects mean that if they throw a superball really hard, after enough bounces it'll end up somewhere entirely different than it would have (calculating the trajectories of superballs is one of the best ways to appreciate the propagation of small changes.) Why would you want as much detail in the simulation as we observe in our reality?
0Jack13y
Good point. I'm reconsidering... I wonder what kind of cascade effect there actually is - perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications. Perhaps that could be done to initially narrow the answer space, and then the precise simulation could be sped up by not having to simulate those answers that contradict the simplified model? I wonder how a hidden variable theory of quantum mechanics being true would affect the prospects for simulation - assuming a super intelligence could leverage that fact somehow (which is admittedly unlikely).
2jacob_cannell13y
What? ;( Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future. Simulating down to the quantum level is overkill to the thousandth degree in most cases, unless you have some causal amplifier - such as a human observing quantum-level phenomena. In that situation the quantum-scale events have a massive impact, so the simulation subdivides space-time down to that scale in those regions. Similar techniques are already employed today in state of the art simulation in computer graphics. There will always be divergences in chaotic systems, but this isn't important. You will never get some exact recreation of our actual history - that's impossible - but you can converge on a set of close traces through the Everett branches. It may even be possible to force them to 'connect' to an approximation of our current branch (although this may take some manual patching).
2Desrtopa13y
Not with great accuracy. And that's only a week; making accurate predictions gets exponentially more difficult the further into the future you go. And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather. The weather is just one of the chaos factors in human society.
0jacob_cannell13y
I'm not sure about this in general - why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms? Yes and no. Human society is largely determined by stuff going on in human brains. Brains are complex systems, but like computers and other circuits they can be simulated extremely accurately at a particular level of detail where they exhibit scale separation, but are essentially randomly chaotic when simulated at coarser levels of detail. Turbulence in fluid systems, important in weather, has no scale separation level and is chaotic all the way down.
1Desrtopa13y
Basic principle of chaos theory. Small scale interferences propagate to large scale interferences, while tiny scale interferences propagate to small scale, and then to large scale. If you try to calculate the trajectory of a superball, you can project it for a couple bounces just modeling mass, elasticity and wind resistance. A couple more? You need detailed information on air turbulence. One article, which I am having a hard time locating, calculated that somewhere in the teens of bounces you would need to integrate the positions of particles across the observable universe due to their gravitational effects. A kid throws a superball. Bounce, bounce, bounce, bounce, bounce, bounce, bounce, bounce, crash. It bounces out into the street, and they're hit by a car chasing after it. In a matter of seconds, deviations on a particulate level have propagated to the societal level. The lives of everyone the kid would have interacted with will be affected, and by extension, the lives of everyone that those people would have interacted with, and so on. The course of history will be dramatically different than if you had calculated those slight turbulence effects that would have sent the ball off in an entirely different direction. You can expect many history altering deviations like this to occur every minute.
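A toy demonstration of that growth rate (illustrative constants; the logistic map stands in for any chaotic system): two trajectories that start 10^-15 apart become completely decorrelated within about fifty steps, because the separation roughly doubles per step.

```python
# Chaotic logistic map x -> 4x(1-x): nearby trajectories separate
# at a roughly constant exponential rate (Lyapunov exponent ~ ln 2).
def step(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-15        # initial difference: one part in 10^15
for i in range(1, 80):
    a, b = step(a), step(b)
    if abs(a - b) > 0.1:       # order-1 disagreement
        print(f"trajectories decorrelated after {i} steps")
        break
```

Halving the initial error buys only one more step of predictability, which is why "a couple more bounces" costs so much extra information.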
1jacob_cannell13y
I'm aware of the error propagation issues, and in some phenomena they can be magnified up spatial scales - a roll of the dice in Vegas is probably a better example of that than your ball. I should point out though that this is all somewhat tangential to our original discussion. But nonetheless...

None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms. Intuitively it seems to make sense - as each particle's state is dependent on a few other particles it interacts with at each timestep, the information dependency fans out exponentially over time. However, intuitions in these situations can often be wrong, and this is nothing like a formal proof.

Getting back to the original discussion, none of this is especially relevant to my main points. Many of the important questions we want to answer are probabilistic - how unlikely was that event? For example, to truly understand the likelihood of life elsewhere in the galaxy and get a good model of galactic development, we will want to understand the likelihood of pivotal events in earth's history - such as the evolution of hominids or the appearance of early life itself. You get answers to those only by running many simulations and mapping out branches of the multiverse. The die roll turns out differently in each, and in some this leads to different consequences. In some cases, especially in initial simulations, one can focus on the branches that match most closely to known history, and even intervene or at least prune to enforce this. But eventually you want to explore the entire space.
2JoshuaZ13y
While this is a good way to get such data, it isn't the only way. If we expand enough to look at a large number of planets in the galaxy, we should arrive at decent estimates simply based on empirical data.
0jacob_cannell13y
Certainly expanding our observational bubble and looking at other stars will give us valuable information. Simulation is a way of expanding on that. However, it's questionable when or if we will ever make it out to the stars. Lightyears are vast for humans, but they will be even vaster units of time for posthuman civilizations that think thousands or millions of times faster than us. It could be that the vast cost of travelling out into space is never worthwhile and those resources are always best used towards developing more local intelligence. John Smart makes a pretty good case for inward expansion always trumping outward expansion.
0Desrtopa13y
If you do probabilistic estimates based on large numbers of simulations, though, you can cut down on the fidelity of the simulations dramatically. I know that this is something you're arguing for, but really, there's no good reason to make the simulations as detailed as the universe we observe.

To take forest succession modeling programs (something I have more experience with than most types of computer modeling) as an example, there are some ecological mechanisms that, if left out, will completely change the trends of the simulation, and some that won't, and you can leave those that don't out entirely, because your uncertainty margins stay pretty much the same whether you integrate them or not. If you created a computer simulation of the forest with such fidelity that it contained animals with awareness, you'd use up a phenomenal amount of computing power, but it wouldn't do you any good as far as accuracy is concerned.

If you care about the lives of the people in the past for their own sake, and are capable of creating high fidelity recreations of their personality from the data available to you, why not upload them into the present so you can interact with them? That, if possible, is something that people actually seem to want to do.

That's true, they don't constitute a formal proof. Maybe a proof already exists and I'm not aware of it, or maybe not, but regardless, given the information available to us in this conversation, right now, the weight of evidence is clearly on the side of such a simulation not being possible over it being possible. You don't get high probability future predictions by imagining ways in which our understanding of chaos theory maybe gets overhauled.
0Jack13y
What about genetic mutations from stray cosmic rays? Would evolution have occurred the same way? Would my genetic code be one allele different? I feel like the quantum level would matter a lot more the earlier you started your simulation. I'm worried about how motivated my cognition is. I really want this to be possible for very personal reasons- so I am liable to grasp tightly to any plausible argument for close-enough simulation of dead people.
0jacob_cannell13y
Well, if you started a sim back a billion years ago, then yes, I expect you'd get a very different earth. How different is an interesting open problem. Even if hominid-like creatures develop say 10% of the time after a billion years (reasonable), all of history would likely be quite different each time. For a sim built for the purpose of resurrection, you'd want to start back just a little earlier - perhaps just before the generation was born.

Getting the DNA right might actually be the easiest sub-problem. Simulating biological development may be tougher than simulating a mind, although I suspect it would get easier as development slows. Hopefully we don't have to simulate all of the 10^13 cells in a typical human body at full detail, let alone the 10^14 symbiotes in the human gut.

It's still an open question whether it's even possible in principle to create a conscious mind from scratch. Currently complex neural net systems must be created through training - there is no shortcut to just fill in the data (assuming you don't already have it from a scan or something, which of course is inapplicable in this case). So even a posthuman god may only have the ability to create conscious infants. If that's the case, you'd have the DNA right and then would have to carefully simulate the entire history of inputs to create the right mind. You'd probably have to start with some actors (played by AIs or posthumans) to kickstart the thing. If that's the general approach, then you could also force a lot of stuff - intervene continuously to keep the sim events as close to known history as possible (perhaps actors play important historical roles even when it's running? open).

Active intervention would of course make it much more feasible to get minds closer to the ones you'd want. Would they be the same? I think that will be an open philosophical issue for a while, but I suspect that you could create minds this way that are close enough. This is interesting enough that it could...
0wedrifid13y
How on earth can we know that 10% is reasonable?
0jacob_cannell13y
The "even if" and "say" should indicate the intent - it wasn't even a guess, just an example used as an upper bound. I'm not convinced the evolution of hominids is a black swan, but it's not an issue I've researched much.
0wedrifid13y
The (reasonable) assertion was what struck me.
0jacob_cannell13y
Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time. There is a natural evolutionary progression: dreams/daydreams/visualizations -> oral stories/mythologies -> written stories/plays/art -> movies/television -> CG/virtual reality/games -> large scale simulations. It isn't 'extrapolating to logical extremes', it is future prediction based on extrapolation of system evolution.

Of course it does. What is our current knowledge about history? It consists of some rough beliefs stored in the low precision analog synapses of our neural networks and a bunch of word-symbols equivalent to the rough beliefs. With enough simulation we could get concise probability estimates or samples of the full configuration of particles on earth every second for the last billion years - all stored in precise digital transistors, for example. This is true only for some initial simulation, but each successive simulation refines knowledge, expands the belief network, and improves the next simulation. You recurse.

Not at all. Given an estimate of the state of a system at time T and the rules of the system's time evolution (physics), simulation can derive values for all subsequent time steps. The generated data is then analyzed and confirms or adjusts theories. You can then iteratively refine.

For a quick primitive example, perhaps future posthumans want to understand in more detail why the Roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand-in actors for an initial generation of parents). Running this model forward may reveal that the lead had little effect, that previous models of some Roman military formations don't actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.
4wedrifid13y
With the help of hindsight bias.
3Desrtopa13y
As wedrifid says, in the light of hindsight bias. Instead of looking at the past and seeing how reliably it seems to lead to the present, try looking at people who actually tried to predict the future. "Future prediction based on extrapolation of system evolution" has reliably failed to make predictions about the direction of human society that were both accurate and meaningful.

Or you could very easily find them removing the lead from their pipes and wine, and changing their military formations. If you don't already know what their crop harvest in 32 BC was like, you can practically guarantee that it won't be the same in the simulation. This is exactly the kind of use where, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn't need to.
0jacob_cannell13y
I'll just reiterate my response then: any information about a physical system at time T reveals information about that system at all other times - it places constraints on its configuration. Physics is a set of functions that describe the exact relations between system states across time steps, i.e. the temporal evolution of the system. We developed physics in order to simulate physical systems and predict and understand their behavior. This seems, then, to be a matter of details - how much simulation is required to produce how much knowledge from how much initial information about the system.

For example, with infinite computing power I could iterate through all simulations of earth's history that are consistent with current observational knowledge. This algorithm computes the probabilities of every fact about the system - the probability of a good crop harvest in 32 BC in Egypt is just the fraction of the simulated multiverse for which this property is true. This algorithm is in fact equivalent to the search procedure in the AIXI universal intelligence algorithm.
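A toy rendering of that counting procedure in Python; the histories, the observational constraint, and the "historical fact" are all invented for illustration, and the real version is the uncomputable AIXI-style search:

```python
from itertools import product

# Enumerate every possible "history" (here just 10-step binary sequences),
# keep those consistent with current observations, and read off the
# probability of a historical fact as its frequency among the survivors.
histories = product([0, 1], repeat=10)

# Invented observational constraint: the present (final) state is 1.
consistent = [h for h in histories if h[-1] == 1]

# Invented historical fact: the state at step 3 was 1.
fact_count = sum(1 for h in consistent if h[3] == 1)
print(fact_count / len(consistent))  # 0.5: this observation says nothing about step 3
```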
2wedrifid13y
I do not believe this is correct. In particular, the 'just a' is not accurate. Approximate simulation is a particular kind of thinking, not the reverse.
0jacob_cannell13y
I'm willing to try on your taxonomy but don't quite understand it. The term thinking certainly covers a wide variety of computations, but perhaps the most important is prediction. Does this sound more accurate: Cortical-forward-simulation is just a particular form of approximate simulation. Simulation in general encompasses all the most precise forms of prediction.
3wedrifid13y
More accurate, but still not right. Simulation just doesn't have special privileges. Again, the general, absolute claim of "all the most" invalidates the position. You can make, and even logically prove, precise predictions without simulating.
0jacob_cannell13y
How? Got an example?
3JoshuaZ13y
If I know an algorithm that outputs 1 or 0 depending on whether the input was prime or not, I can use a different prime checking algorithm without running the whole thing. So, for example, if the algorithm is naive trial division, I can predict its result very quickly using something like Agrawal's algorithm or some variant of Miller-Rabin. This example is in some ways a toy example, but it isn't obvious that one wouldn't have similar examples for more complicated phenomena.
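A small Python sketch of that substitution; sympy's isprime stands in here for Miller-Rabin/AKS, and the particular number is arbitrary:

```python
from sympy import isprime  # fast primality test standing in for Miller-Rabin / AKS

def trial_division(n):
    """The 'slow' algorithm whose output we want to predict: naive trial division."""
    if n < 2:
        return 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            return 0
        d += 1
    return 1

n = 2**61 - 1  # a Mersenne prime; trial division would need ~2**30 iterations here

# Predict the slow algorithm's output without ever running it.
prediction = 1 if isprime(n) else 0
print(prediction)  # 1, obtained without simulating trial_division(n)
```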
1wedrifid13y
And any example is sufficient to reject a general absolute claim.
-5[anonymous]13y
2Dr_Manhattan13y
Not wouldn't, doesn't. And I think it doesn't due to lack of evidence.

I'm in the 'everything that can exist does so; we're a fixed point in a cloud of possibilities' camp. I'm also an atheist, because I see theism as an extraordinarily arbitrary and restrictive constraint on what should or must be true in order for us to exist.

It's simply too narrow and unjustified for me to take seriously, and the fact that its trappings are naive and full of wishful thinking and ulterior motives means I certainly don't.

-1Will_Newsome13y
The way I've been envisioning theism is as a pretty broad class of hypotheses that is basically described as 'this patch of the universe we find ourselves in is being computed by something agenty'. What is your conception of theism that makes it more arbitrary and restrictive than this?
0rosyatrandom13y
Since my metaphysical position is (and I'm going to have to come up with a better term for it) pan-existence, having gods that create and influence things requires that those possibilities where they don't (or where other, similar-but-different gods do) are somehow rendered impossible or unlikely. Gods being statistically significant requires some metaphysical reason for them to be so, simply in order to stop the secular realities from dominating, and the arbitrary focus of theistic gods on humanity and our loose morals only serves to make them ever more over-specified and unlikely.
0jacob_cannell13y
The answer is simple: evolution. That which replicates is more (a priori) likely than that which does not. Out of the space of universes, those that spawn many sub-universes will statistically dominate. Just another rewording of the SA.
0wedrifid13y
I dispute the 'a priori' claim. There are cases where this would not be so. I think this is an a posteriori conclusion on the order of 'the sun will come up next Tuesday'.
0Will_Sawin13y
Rosy asked for significant probability mass on God-endowed universes. Jacob's argument works a priori: not with logical necessity, but with significant probability mass.
0wedrifid13y
I believe you are mistaken (on this overwhelmingly unimportant question of semantics). The cause and consequence of replication are rather critical for whether being the kind of pan-existential god-universe thing that replicates will make said universe more prolific.
0Will_Sawin13y
...and two highly plausible answers to the questions of cause and consequence are "because of certain features of the universe" and "that are preserved with high probability by replication". Compare to: "Well, there's this 'sun' thing, a giant glowing ball apparently tracing a circular path that, for about half its arc, is obstructed by a large object, creating a sequence of distinct periods in which this sun is visible, one of which will be called Tuesday, ..."
0wedrifid13y
You seem to have introduced new assumptions.
0Will_Sawin13y
My assumptions have significant probability a priori.

Well, agents pretty much tend to be complicated things that need to be explained in terms of more basic things. So if some sort of agent in some sense deliberately created our world... that agent still wouldn't be the most fundamental thing; it would need to be explained in terms of more basic principles. Somewhere along the line there'd have to be "simple math" or such. (Even if somehow you could have an infinite hierarchy of agents, the basic math-type explanation would have to explain/predict the hierarchy of agents.)

As far as "whate... (read more)

When you talk about the whooole Universe, you should not artificially exclude the intelligent creator from it. And if you do include it, then your question can be rephrased like this: Is it possible that the interaction graph of our Universe has a strange hourglass shape with us in the lower bulb, and some intelligent creator in the upper bulb? I say very unlikely.

The simulation argument may suggest some weird interconnected network of bulbs, but that has nothing to do with theism. When and if humanity becomes aware of our simulators, our reaction will not... (read more)

2JamesAndrix13y
I don't see how that can really happen. I've never heard a non-hierarchical simulation hypothesis.
3Vladimir_Nesov13y
Consider an agent that has to simulate itself in order to understand the consequences of its own decisions. Of course, there's bound to be some logical uncertainty in this process, but the agent could have an exact definition of itself, and so eventually the ability to see all the facts. For two agents, that's a form of acausal communication (perception). (This is meaningless only in the same sense as the ordinary simulation hypothesis is meaningless.)
3Document13y
It's one of the implications of a universe that can compute actual infinities; it's been proposed in fiction, but I don't know about beyond that.
6DanielVarga13y
That is correct, and an even better fictional example is the good short story titled I don't know, Timmy, being God is a big responsibility. But this is not exactly what I meant here. I don't propose any non-hierarchical or infinite simulation hypothesis. Rather, all I am saying is that it is not a logical impossibility that two Universes have such a weird yin-yang simulated-simulant relationship. (Even in perfect isolation, just the two of them, without invoking an infinite chain of universes.) Obviously it is acausal, but that is a probabilistic, thermodynamic kind of improbability rather than a logical impossibility.

Maybe an easier such example is a spatially centrally symmetric Universe, where you can meet your exact clone who always does what you do. Or my very favorite, the temporally symmetric Universe, a version of the Gold Universe. Or a Hinduist Universe where time goes in circles. The point is, the idea that we live in a constructed, causally almost-but-not-perfectly isolated part of the Universe seems just an aesthetically displeasing corner case when discussed in the context of all these imaginable interaction networks.

There's not enough evidence to locate the hypothesis, so while I technically give it a non-zero probability, that probability is not high enough for me to consider it worth significant time to investigate.

As for arguing against it in public: at most one human religion can be true. All the others must be false. So decreasing the amount of religion in the world improves net accuracy. Also and perhaps more importantly, religion is a major source of Dark Side Epistemology. So on the meta-level, minimizing the influence of religion will help people become more rational.

7wedrifid13y
That line works a lot better for 'Jehovah' than 'theism', especially if you apply the latter term liberally.
-2JoshuaZ13y
Huh? I would think if anything it is the other way around. We have something which locates the Jehovah hypothesis: ancient texts claiming the entity's intervention, and modern individuals claiming to communicate with the entity. The real issue is that after locating, there are much better explanations for the data.

If you think that it's easier to locate the hypothesis of Jehovah than the hypothesis of theism, then you're falling victim to a variation of the conjunction fallacy. Belief in Jehovah is itself a variety of theism.

Nevertheless, I agree with you that there's plenty of evidence to locate the hypothesis of Jehovah (and therefore there is at least that much to locate theism), just very little evidence to confirm it when it's examined.

6JoshuaZ13y
Yes, you're right. That's an awful conjunction fallacy. Almost textbookish. Ugh.
0Miller13y
I don't think I understand what 'locate the hypothesis' is. I do know what the conjunction fallacy is. I suspect the confusion here is my own. You can identify a dog with more certainty than you can identify a mammal, even though all dogs are mammals. What did I miss?
4JoshuaZ13y
Locating a hypothesis means having enough evidence for a hypothesis that one can say the hypothesis is worth considering at some minimal level. This is necessary because humans have limited cognitive capability, so we can't consider every possible hypothesis out there (we can't even practically list them all). Thus, for example, if someone ran up to you on the street and screamed "the mutant aliens are in the sewer! They're powered by draining nuclear power plants!" you probably wouldn't consider the claim much at all, but would rather entertain other hypotheses (that the person is mentally ill, or is engaging in some strange prank, would both be more likely).

Toby's point was that my claim that the Jehovah hypothesis could be more easily located than the theist hypothesis must be wrong. Since the theist hypothesis is implied by (or encompasses, depending on how you look at it) the Jehovah hypothesis, anything that located the Jehovah hypothesis must be locating the more general theist hypothesis. This is a common cognitive error that humans make, called the conjunction fallacy, where people will assign a higher probability to something more specific than to something general, even though the general thing is entailed by the specific thing. I'm a bit embarrassed by that actually, since it shows serious failings on my part as a rationalist.
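Spelled out, since the Jehovah hypothesis is theism plus extra specifics:

$$P(\text{Jehovah}) \,=\, P(\text{theism}) \cdot P(\text{Jehovah's specifics} \mid \text{theism}) \,\le\, P(\text{theism})$$

And the inequality survives conditioning on any evidence, so nothing can make the specific hypothesis more probable than the general one.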
2TobyBartels13y
The reason that I said ‘a variation of the conjunction fallacy’ is that the standard conjunction fallacy that I know is about assigning probabilities to propositions rather than attending to them. (You might choose to attend to something with a fairly low probability, for example, if its expected consequences are significant enough to overcome this.) Nevertheless, to consider the possibility that Jehovah exists, you must consider the possibility that a god exists.
0Miller13y
Wow, that was fast. I was writing an edit, after looking up the wiki, when I refreshed and it looked almost exactly like your first paragraph. Yes, in absolute probability terms theism must be more probable than Jehovah. Thus, the conjunction fallacy. At first glance the terminology 'locate the hypothesis' is rather non-intuitive. I'm going to give it some consideration (and I don't think this is the appropriate place anyway) before commenting further on that.
3Desrtopa13y
Hopefully this should clear things up.

I think the theism/atheism debate is considered closed in the following sense: no one currently has any good reasons in support of theism (direct evidence, or rational/Bayesian arguments). We can't say that such a reason won't show up in the future, but from what we know right now, theism just isn't worth considering. The territory, from all indications, is Godless (and soulless, for that matter), so the map should reflect that.

0PhilGoetz13y
The argument that we probably live in a simulation is the specific argument in support of theism that the OP invokes (but does not mention specifically).
0jacob_cannell13y
I may add that the SA forces us to adopt theism as a consequence of current physical theory, not as some modification to current theory for which we require new evidence, and this is what makes it especially powerful. I was an atheist until I updated on the SA, and I have yet to find any rational opposition to it.
-2Davidmanheim13y
When you say there are no good reasons in support of theism, I assume you mean the truth of theism, not the idea that it may create positive externalities? Or are you claiming that there is no benefit to theism whatsoever? If the territory is to be faithfully represented, we cannot say that the existence of a deity is a necessary component, but that doesn't necessarily imply that the existence of religion is a pure negative.
0Dreaded_Anomaly13y
Yes, I was just talking about the truth of theism. The existence of religion isn't a pure negative, but I think the human race could do better.

What about those few of us who don't believe that the Simulation Argument is most probably true? Don't get me wrong, it could be true; I just don't see any evidence to suppose that it is.

On that note, I always understood the word "theism" to mean "gods exist, and they interfere in the workings of our Universe in detectable ways". Isn't someone who believes in entirely unfalsifiable gods functionally equivalent to an atheist?

0TheOtherDave12y
If I believe in unfalsifiable gods who prefer that I behave in certain ways (though they do not provide me with any evidence of that preference), and I value the preferences of those gods enough to change my behavior accordingly, then I will behave differently than if I do not believe in those gods or do not value their preferences. That alone would make Dave-the-atheist not functionally equivalent to Dave-the-theist-without-evidence, wouldn't it?
0Bugmaster12y
Technically, yes, but atheists also behave differently from each other, for all kinds of reasons. If Dave-the-theist truly believes that his gods are unfalsifiable, then he probably won't be seeking to convert others to his faith (since attempting to do so would be futile by definition). At that point, he's just like any atheist with an opinion.
0TimS12y
Why does the unfalsifiability of god show that believers won't proselytize?
0Bugmaster12y
A truly unfalsifiable god does not, by definition, provide any evidence of its existence. Thus, there's no "good news" to be spread, since a world with the god in it looks exactly the same as a world without the god.
0TheOtherDave12y
Sure there is. For example, the Good News might be "God will reward those who worship him as follows: {blah blah blah} after they die." Unfalsifiable, but certainly good to know if true. The fact that you demand evidence before adopting such a belief is of no particular interest to Dave-the-theist-without-evidence.
0Bugmaster12y
This is a falsifiable claim, assuming that we have some evidence of the afterlife. If we have no such evidence, then, in order for this to count as good news, the theist would first have to convince me that there's an afterlife. In the absence of evidence, how is he going to convince anyone that his unfalsifiable belief is true?
0TheOtherDave12y
Agreed that given evidence of the afterlife, it's a falsifiable claim, and lacking such evidence it's unfalsifiable. I know of no such evidence, so I conclude it's unfalsifiable. Do you know of any such evidence? If not, do you also conclude that it's unfalsifiable? What you seem to be implying is that there exist no (or negligible numbers of) people in the real world who can be convinced of claims for which there is no evidence, which is demonstrably false. Are you in fact asserting that, or am I completely misunderstanding you?
1Bugmaster12y
Yes, I conclude that most kinds of afterlife are unfalsifiable. Some are falsifiable, but they are in the minority: for example, if your religion claims that the dead occasionally haunt the living from beyond the grave, that's a falsifiable claim. Sort of.

I would agree with this sentence as it is stated, with the caveat that what most people see as "evidence", and what you and I see as "evidence", are probably two different things. To use a crude example, most Creationists believe that the complexity of the natural world is evidence for God's involvement in its creation. Many theists believe that the feelings and emotions they experience after (or during) prayer are caused by their gods' explicit response to the prayer, which is also a kind of evidence. Sure, you and I would probably discount these things as cognitive biases (well, I know I would), but that's beside the point; what matters here is that the theist thinks that the evidence is there, and thus his gods are falsifiable.

When theists proselytize, they often use these kinds of evidence to convert people. By contrast, someone who believes in an explicitly unfalsifiable god would not attribute any effects (mental or physical) to its existence, and thus does not have a workable way to convince others. The best he could say is, "you should believe as I do because it's a neat self-improvement technique", or something to that extent.
0TheOtherDave12y
(shrug) Sure, if we expand the meaning of "evidence" to include things we don't consider evidence, then I agree that my earlier statement becomes false.
0Bugmaster12y
Who are "we", in this case ? A typical theist does believe that he has evidence for his falsifiable god. He may be wrong about this, of course (and most probably is), but that's a matter for another debate. I was under the impression, though, that we were discussing atypical theists: those who believe that their gods are explicitly unfalsifiable. They are deliberately stating, "there's no way anyone could determine by any means whether my gods exist or not"; this is directly opposite to stating something like, "look at how complex life is, only a god could've created all that".
0TheOtherDave12y
Hm. It's possible that I've lost the thread of what we're discussing. It seems to me to follow from what you've said that a theist who explicitly believes their belief in god is unfalsifiable, therefore necessarily explicitly believes there to be no evidence for that belief, therefore necessarily believes that proselytizing others is necessarily futile (since everyone requires evidence to adopt such beliefs, and therefore they believe that everyone requires evidence, and since they know they have no evidence, they know they cannot convince anyone), therefore is functionally equivalent to an atheist, who is functionally defined by their unwillingness to proselytize. Have I followed that correctly? If not, can you provide a corrected summary?

(1) My discussion with a theist today settled on the issue of whether to even accept that a "higher domain" creates a "lower domain" for a good purpose. My argument is: why waste reality?

(2) There is a somewhat false duality between creation and discovery: whether the performer determines the result, or the object determines the result, can be relative to the modeling faculty of the observer. And since we as observers, and simultaneously "the object", have free will, from our perspective we are in any case discovered rather than created. And as long as God does not act upon the discovery, it is inconsequential.

Does naturalism vs. supernaturalism strike you as controversial? If not, what question is left?

I personally use "naturalist" to describe myself instead of "atheist" or "agnostic" because I believe it captures my beliefs much more strongly- I don't have certainty there is no omnipotent entity, and I am more committed than just shrugging my shoulders. Supernaturalism is right out, and most varieties of naturalistic theism don't hold water.

3Perplexed13y
According to Wikipedia, a naturalist is usually understood to be something different than a proponent of naturalism. Common usage tends to be more confused about the distinction between a naturalist and a naturist.
0DSimon13y
I've run into problems with "naturalist" with people thinking that it means I support organic farming, or alternative medicine, or similar things that tend to get marketed with the adjective "natural". I've had better luck with "materialist", though that also has some pop-culture implications that I'm not trying to express.
3ata13y
Yeah, I avoid "materialist" for that reason. I usually go with "physicalist" for that sort of thing (or "reductionist" if I'm talking to someone who I think won't immediately misinterpret it).
0DSimon13y
Yeah, "physicalist" is good, I may have to start using that.

Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid."

No, no! Don't go back on your excellent question because the LessWrong-affiliationist-zombies downthumb-bombed it. You defined theism in a way so that your question is valid.

By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.

That is emphatically not what people like Alvin Plantinga are talking about. Simulation argument provides no support for omni-benevolent omni-potent omni-scient omni-present entities; I don't know why you bring it up.

And if you've been reading Luke's blog, you probably already know that one of the... (read more)

2lukeprog13y
gwern, Plantinga's Free Will Defense is not an argument for theism. The conclusion of the free will argument is that it is not logically impossible for God and evil to co-exist. That is an extremely modest conclusion on the part of the theist.
0gwern13y
We observe a lack of evidence of contradictions in the concept of god, and absence of evidence is evidence of absence. Of course the FWD increases our probability for God if we accept it; what else could it possibly do, decrease it? The most charitable interpretation I can put on your comment is that you are confusedly saying 'yes, but it doesn't increase it by much' when I'm pointing out that it increases it by some non-zero amount, however modest that amount may be.
1lukeprog13y
Okay, I see what you mean. Thanks for clarifying!
2Dreaded_Anomaly13y
Beyond that, it's just not a very good argument. If the entity was omnipotent, it could have given us free will without creating evil. At the least, it could have created less evil by giving all humans force fields, so all we could do to harm each other would be to gossip and insult.

If you don't mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed, without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed theological explanations for differences between the Bible and science, but that I couldn't learn them yet; but to my recollection I was never actually told that, I just worked it out from the other things I knew.

0Will_Newsome13y
I knew some convincing arguments against theism, but I suppose what I explicitly did not know were the counterarguments to the theistic rebuttals of those convincing atheistic arguments, because I was quick to dismiss the theistic rebuttals in the first place.

The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid."

Sure we do: it is called "intelligent design" - or more specifically, intelligent design of life and/or the universe.

My article on the topic: Viable Intelligent Design Hypotheses.

3SRStarin13y
Your general point in your linked piece is sound, because one can imagine eventually falsifying at least some of the proposed theories you list, but you do wrong to say Kitzmiller is problematic. It was a legal finding, based on testimony and hard evidence, that the folks claiming that Intelligent Design was science, were in fact tantamount to a conspiracy to dress "Creationism" in new clothes. Creationism had already been declared a fundamentally religious doctrine, and not a scientific theory. That was settled law. The folks who brought in ID actually had discussion with one another about how best to convert Creationist texts into ID texts and pamphlets without them being recognizable as creationism. These were charlatans of the worst sort, caught in their own lies. I suggest reading the decision.