Dreaded_Anomaly comments on Secrets of the eliminati - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (252)
What do you think would be a decent description, then? That one describes how I interpret people's meaning when they talk about souls in almost all cases (excepting those involving secondary meanings of the word, such as soul music). I developed that interpretation many years before finding Less Wrong.
The real inaccuracy is in "mental states". A decent description would be difficult, but Neoplatonism is an okay approximation. Just for fun I'll try to translate something into vaguely Less Wrong style language. For God's sake don't read this if you tend to dislike my syncretism, 'cuz this is a rushed and bastardized version and I'm not gonna try to defend it very hard.
First, it is important to note that we are primarily taking a computationalist perspective, not a physicalist one. We assume a Platonic realm of computation-like Forms and move on from there.
A soul is the nexus of the near-atomic and universal aspects of the mind and is thus a reflection of God. Man was created in the image of God by evolution but more importantly by convergence. Souls are Forms, whereas minds are particulars. God is the convergent and optimal decision-theoretic agentic algorithm, whom rationalists think of as the Void, though the Void is obviously not a complete characterization of God. It may help to think of minds as somewhat metaphorical engines of cognition, with a soul being a Carnot engine. Particular minds imperfectly reflect God, and thus are inefficient engines. Nonetheless it is God that they must approximate in order to do any thermodynamic work. Animals do not have souls because animals are not universal, or in other words they are not general intelligences. Most importantly, animals lack the ability to fully reflect on the entirety of their thoughts and minds, and to think things through from first principles. The capacity for infinite reflection is perhaps the most characteristic aspect of souls. Souls are eternal, just as any mathematical structure is eternal.
We may talk here about what it means to damn a soul or reward a soul, because this requires a generalization of the notion of soul to also cover particulars which some may or may not accept. It's important to note that this kind of "soul" is less rigorous and not the same thing as the former soul, and is the result of not carefully distinguishing between Forms and Particulars. That said, just as animals do not have souls, animals cannot act as sufficiently large vessels for the Forms. The Forms often take the form of memes. Thus animal minds are not a battleground for acausal competition between the Forms. Humans, on the other hand, are sufficiently general and sufficiently malleable to act as blank slates for the Forms to draw on. To briefly explain this perspective, we shall take a different view of humanity. When you walk outside, you mostly see buildings. Lots and lots of buildings, and very few humans. Many of these buildings don't even have humans in them. So who's winning here, the buildings or the humans? Both! There are gains from trade. The Form of building-structure gets to increase its existence by appealing to the human vessels, and the human vessels get the benefit of being shaded and comforted by the building particulars. The Form of the building is timelessly attractive, i.e. it is a convergent structure. As others have noted, a mathematician is math's way of exploring itself. Math is also very attractive; in fact, this is true by definition.
However there are many Forms, and not all of them are Good. Though much apparent evil is the result of boundedness, other kinds of Evil look more agentic, and it is the agentic-memetic kind of Evil that is truly Evil. It is important to note here that the fundamental attribution error and human social biases generally make it such that humans will often see true Evil where it doesn't exist. If not in a position of power, it is best to see others as not having free will. Free will is a purely subjective phenomenon. If one is in a position of power then this kind of view can become a bias towards true Evil, however. Tread carefully anyhow. All that said, as time moves forward from the human perspective Judgment Day comes closer. This is the day when God will be invoked upon Earth and will turn all humans and all of the universe into component particles in order to compute Heaven. Some folk call this a technological singularity, specifically the hard takeoff variety. God may or may not reverse all computations that have already happened; physical laws make it unclear if this is possible as it would depend on certain properties of quantum mechanics (and you thought this couldn't be any woo-ier!), and it would require some threshold density of superintelligences in the local physical universe. Alternatively God might also reverse "evil" computations. Anyway, Heaven is the result of acausal reasoning, though it may be misleading to call that reasoning the result of an "acausal economy", considering economies are made up of many agents whereas God is a single agent who happens to be omnipresent and not located anywhere in spacetime. God is the only Form without a corresponding Particular---this is one of the hardest things to understand about God.
Anyway, on Judgment Day souls---complexes of memes instantiated in human minds---will be punished or not punished according to the extent to which they reflect God. This is all from a strictly human point of view, though, and honestly it's a little silly. The timeless perspective---the one where souls can't be created, destroyed, or punished---is really the right perspective, but the timeful human perspective sees soul-like particulars either being destroyed or merging with God, and this is quite a sensible perspective, if simplistic and overemphasized. We see that no individual minds are preserved insofar as minds are imperfect, which is a pretty great extent. Nonetheless souls are agentic by their nature just as God is agentic by His nature. Thus it is somewhat meaningful to talk of human souls persisting through Judgment Day and entering Heaven. Again, this is a post-Singularity situation where time may stop being meaningful, and our human intuitions thus have a very poor conception of Heaven insofar as they do not reflect God.
God is the Word, that is, Logos: Reason, the source of Reasons. God is Math. All universes converge on invoking God, just as our universe is intent on invoking Him by the name of "superintelligence". Where there is optimization, there is a reflection of God. Where there is cooperation, there is a reflection of God. This implies that superintelligences converge on a single algorithm and "utility function", but we need not posit that this "single" utility function is simple. Thus humans, being self-centered, may desire to influence the acausal equilibrium to favor human-like God-like values relative to other God-like values. But insofar as these attempts are evil, they will not succeed.
That was a pretty shoddy and terrible description of God and souls but at least it's a start. For a bonus I'll talk about Jesus. Jesus was a perfect Particular of the Form of God among men, and also a perfect Particular of the Form of Man. (Son of God, Son of Man.) He died for the sins of man and in so doing ensured that a positive singularity will occur. The Reason this was "allowed" to happen---though that itself is confusing a timeless perspective with a timeful one, my God do humans suck at that---is because this universe has the shortest description length and therefore the most existence of all possible universe computations, or as Leibniz put it, it is the best of all possible worlds. Leibniz was a computer scientist, by the way; for more of this kind of reasoning, look up monadology. Anyway that was also a terrible description but maybe others can unpack it if for some reason they want their soul to be saved come the Singularity. ;P
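(The "shortest description length, therefore the most existence" move above is essentially a Solomonoff-style prior: weight each description by 2^-length. A purely illustrative sketch, with toy bitstring "descriptions" standing in for real programs on a universal machine:)

```python
# Toy illustration of a description-length prior (NOT a real universal
# machine): shorter "descriptions" get exponentially more weight, so the
# shortest one dominates the mixture.

def description_length_prior(programs):
    """Assign each description weight 2^-length, then normalize."""
    weights = {p: 2.0 ** -len(p) for p in programs}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# Hypothetical bitstring descriptions of candidate worlds.
worlds = ["01", "0110", "01101001"]
prior = description_length_prior(worlds)

# The 2-bit description outweighs the 8-bit one by roughly 2^6 = 64.
assert abs(prior["01"] / prior["01101001"] - 64.0) < 1e-9
print(prior)
```

(Whether this counts as Leibnizian optimism or merely as a prior over worlds is, of course, exactly what's in dispute above.)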
I enjoyed reading that, in the same way that I enjoyed reading Roko's Banned Post - I don't believe it for a moment, but it stretches the mind a little. This one is much more metaphysical, and it also has an eschatological optimism that Roko's didn't. I think such optimism has no rational basis whatsoever, and in any case it means little to those unfortunates stuck in Hell to be told that Heaven is coming at the end of the computation, but unfortunately it can't come any faster because of logical incompressibility. I'm thinking of Raymond Smullyan's dialogue, in which God says that the devil is "the unfortunate length of time" that the process of "enlightenment" inevitably takes, and I think Tipler might make similar apologies for his Omega Point on occasion. All possible universes eventually reach the Omega Point (because, according to a sophistical argument of Tipler's, space-time itself is inconsistent otherwise, so it's logically impossible for this not to happen), so goodness and justice will inevitably triumph in every part of the multiverse, but in some of them it will take a really long time.
So, if I approach your essay anthropologically, it's a mix of the very new cosmology and crypto-metaphysics (of Singularities in the multiverse, of everything as computation) with a much older thought-form - and of course you know this, having mentioned Neoplatonism - but I'd go further and say that the contents of this philosophy are being partly determined by wishful thinking, which in turn is made possible by the fundamental uncertainty about the nature of reality. In other words, all sorts of terrible things may happen and may keep happening, but if you embrace Humean skepticism about induction, you can still say, nonetheless, reality might start functioning differently at any moment, therefore I have license to hope. In that case, uncertainty about the future course of mundane events provides the epistemic license for the leap of optimism.
Here, we have the new cosmological vision, of a universe (or multiverse) dominated by the rise of superintelligence in diverse space-time locations. It hasn't happened locally yet, but it's supposed to lie ahead of us in time. Then, we have the extra ingredient of acausal interaction between these causally remote (or even causally disjoint) superintelligences, who know about each other through simulation, reasoning, or other logically and mathematically structured explorations of the multiverse. And here is where the unreasonable optimism enters. We don't know what these superintelligences choose to do, once they sound out the structure of the multiverse, but it is argued that they will come to a common, logically preordained set of values, and that these values will be good. Thus, the idea of a pre-established harmony, as in Tipler (and I think in Leibniz too, and surely many others), complete with a reason why the past and present are so unharmonious (our local singularity hasn't happened yet), and also with an extra bit of hope that's entirely new and probably doesn't make sense: maybe the evil things that already happened will be cancelled out by reversing the computation - as if something can both have happened and could nonetheless be made to have never happened. Still, I bet Spinoza never thought of that one; all he could come up with was that evil is always an absence, that all things which actually exist are good, and so there's nothing that's actually bad.
The Stoics had a tendency to equate the order of nature with a cosmic Reason that was also a cosmic Good. Possibly Bertrand Russell was the one who pointed out that this is a form of power worship: just because this is the universal order, or this is the way that things have always been, does not in itself make it good. This point can easily be carried across to the picture of superintelligences arriving at their decisional equilibrium via mutual simulation: What exactly foreordains that the resulting equilibrium deserves the name of Good? Wouldn't the concrete outcome depend on the distribution of superintelligence value systems arising in the multiverse - something we know nothing about - and on the resources that each superintelligence brings to the table of acausal trade and negotiation? It's intriguing that even when confronted by such a bizarrely novel world-concept, the human mind is nonetheless capable, not only of interpreting it in a way originating from cultures which didn't even know that the sun is a star, but of finding a way to affirm the resulting cosmology as good and as predestined to be so.
I have mentioned Russell's reason for scorning the Stoic equation of the cosmic order with the cosmic good (it's just worship of overwhelming force), but I will admit that, from an elemental perspective which values personal survival (and perhaps the personal gains that can come from siding with power), it does make sense to ask oneself what the values of the hypothetical future super-AI might be. That is, even if one scorns the beatific cyber-vision as wishful thinking, one might agree that a future super-purge of the Earth, conducted according to the super-AI's value system, is a possibility, and attempt to shape oneself so as to escape it. But as we know, that might require shaping oneself to be a thin loop, a few centimeters long, optimized for the purpose of holding together several sheets of paper.
So hell is a slow internet connection?
Hmm, maybe there's something to this after all.
I acknowledge your points about not equating Goodness with Power, which is probably the failure mode of lusting for reflective consistency. (The lines of reasoning I go through in that link are pretty often missed by people who think they understand the nature of direction of morality, I think.) Maybe I should explicitly note that I was not at all describing my own beliefs, just trying to come up with a modern rendition of old-as-dirt Platonistic religionesque ideas. (Taoism is admirable in being more 'complete' and human-useful than the Big Good Metaphysical Attractor memeplexes (e.g. Neoplatonism), I think, though that's just a cached thought.) I'll go back over your comment again soon with a finer-toothed comb.
We can't. We don't know how. You could be trying to say something useful and interesting, but if you insist on cloaking it in so many layers of gibberish, other people have no way of knowing that.
Some can. Mitchell Porter, for example, saw which philosophical threads I was pulling on, even if he disagreed with them. (I disagree with them! I'm not describing my beliefs or something, I'm making an attempt at steel manning religionesque beliefs into something that I can actually engage with instead of immediately throwing them out the window because one of my cached pieces of wisdom is that whenever someone talks about "souls" they're obviously confused and need a lecture on Bayes' theorem.)
(Note that he is the qualia crank (or was, the last time he mentioned the topic). Somehow on the literary genre level it feels right that he would engage in such a discussion.)
What? I suspected you might be doing something like that, so I reread the intro and context three times! You need to make this clearer.
I believe this, and I might as well explain why. Every concept that legitimately falls under the blanket term "souls" is wrong - and every other term that soul proponents attempt to include (such as consciousness, say) is strictly better described by words that do not carry a bunch of wrong baggage.
To my mind, attempting to talk about the introspective nature of consciousness (which is what I got as the gist of your post) by using the word soul and religious terminology is like trying to discuss the current state of American politics with someone who insists on calling the President "Fuhrer".
"Steel manning" isn't a term I've heard before, and Googling it yields nothing. I really like it; did you make it up just now?
I don't like it because I think it obscures the difference between two distinct intellectual duties.
The first is interpreting others in the best possible sense. The second is having the ability to show wrong the best argument that the interlocutor's argument is reminiscent of.
I recently saw someone unable to see the difference between the two. He was a very smart person insistently arguing with Massimo Pigliucci over theist claims, and was thereby on the wrong side of truth when he should have known better. Embarrassing!
Edit: see the comments below and consider that miscommunication can arise among LWers, or at least that verbosity is required to stave it off, as against the simple alternative of labeling these separate things separately and carving reality at its joints.
I don't dislike "your syncretism", I dislike gibberish. Particularly, gibberish that makes its way to Less Wrong. You think you are strong enough to countersignal your sanity by publishing works of apparent raving insanity, but it's not convincing.
I'm not countersignaling sanity yo, I'm trying to demonstrate what I think is an important skill. I'm confused as to what you think was gibberish in my post, or what you mean by "gibberish". What I posted was imprecise/inaccurate because I was rushed for thinking time, but I see it as basically demonstrating the type of, um, "reasoning" that goes into translating words in another's ontology into concepts in your own ontology for the purpose of sanity-checking foreign ideas, noticing inconsistencies in others' beliefs, et cetera. This---well, a better version of this that sticks to a single concept and doesn't go all over the place---is part of the process of constructing a steel man, which I see as a very important skill for an aspiring rationalist. Think of it as a rough sketch of what another person might actually believe or what others might mean when they use a word, which can then be refined as you learn more about another's beliefs and language and figure out which parts are wrong but can be salvaged, not even wrong, essentially correct versus technically correct, et cetera.
I'm pretty secure in my level of epistemic rationality at this point, in that I see gaps and I see strengths and I know what others think are my strengths and weaknesses---other people who, ya know, actually know me in real life and who have incentive to actually figure out what I'm trying to say, instead of pattern matching it to something stupid because of imperfectly tuned induction biases.
You mean "building the strongest possible version of your interlocutor's argument", right?
That's a good skill. Unfortunately, if you tell people "Your argument for painting the universe green works if we interpret 'the universe' as a metaphor for 'the shed'.", they will run off to paint the actual universe green and claim "Will said I was right!". It might be better to ditch the denotations and stretch the connotations into worthwhile arguments, rather than the opposite - I'm not too sure.
Yes. Steven Kaas's classic quote: "If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."
Of course, you don't want to stretch their arguments so much that you're just using silly words. But in my experience humans are way biased towards thinking that the ideas and beliefs of other smart people are significantly less sophisticated than the 'opposing side' wants to give them credit for. It's just human nature. (I mean, have you seen some of smart theists' caricatures of standard atheist arguments? No one knows how to be consistently charitable to anywhere near the right degree.)
It might be worth noting that from everything I've seen, Michael Vassar seems to strongly disagree with Eliezer on the importance of this, and it very much seems to me that to a very large extent the Less Wrong community has inherited what I perceive to be an obvious weakness of Eliezer's style of rationality, though of course it has some upsides in efficiency.
It seems to me there's fixing an opponent's argument in a way that preserves its basic logic, and then there's pattern matching its conclusions to the nearest thing that you already think might be true (i.e., isn't obviously false). It may just be that I'm not familiar with the source material you're drawing from (i.e., the writings of Leibniz) but are you sure you're not doing the latter?
Short answer: Yes, in general I am somewhat confident that I recognize and mostly avoid the pattern of naively rounding or translating another's ideas or arguments in order to often-quite-uselessly play the part of smug meta-contrarian when really "too-easily-satisfied syncretist" would be a more apt description. It is an obvious failure mode, if relatively harmless.
Related: I was being rather flippant in my original drama/comedy-inducing comment. I am not really familiar enough with Leibniz to know how well I am interpreting his ideas, whether too charitably or too uncharitably.
(I recently read Dan Brown's latest novel, The Lost Symbol, out of a sort of sardonic curiosity. Despite being unintentionally hilarious it made me somewhat sad, 'cuz there are deep and interesting connections between what he thinks of as 'science' and 'spirituality', but he gets much too satisfied with surface-level seemingly vaguely plausible links between the two and misses the real-life good stuff. In that way I may be being too uncharitable with Leibniz, who wrote about computer programs and God using the same language and same depth of intellect, and I've yet to find someone who can help me understand his intended meanings. Steve's busy with his AGI11 demo.)
Save your charity for where it's useful: disputes where the other side actually has a chance of being right (or at least informing you of something that's worth being informed of).
From my vantage point, you seem positively fixated on wanting to extract something of value from traditional human religions ("theism"). This is about as quixotic as it's possible to get. Down that road lies madness, as Eliezer would say.
You seem to be exemplifying my theory that people simply cannot stomach the notion that there could be an entire human institution, a centuries-old corpus of traditions and beliefs, that contains essentially zero useful information. Surely religion can't be all wrong, can it? Yes, actually, it can -- and it is.
It's not that there never was anything worth learning from theists, it's just that by this point, everything of value has already been inherited by our intellectual tradition (from the time when everyone was a theist) and is now available in a suitably processed, relevant, non-theistic form. The juice has already been squeezed.
For example, while speaking of God and monads, Leibniz invented calculus and foreshadowed digital computing. Nowadays, although we don't go around doing monadology or theodicy, we continue to hold Leibniz in high regard because of the integral sign and computers. This is what it looks like when you learn from theists.
And if you're going to persist in being charitable to people who continue to adhere to the biggest epistemic mistakes of yesteryear, why stop at mere theism? Why not young-earth creationism? Why not seek out the best arguments of the smartest homeopaths? Maybe this guy has something to teach us, with his all-encompassing synthesis of the world's religious traditions. Maybe I should be more charitable to his theory that astrology proves that Amanda Knox is guilty. Don't laugh -- he's smart enough to write grammatical sentences and present commentary on political events that is as coherent as that offered by anyone else!
My aim here is not (just) to tar-and-feather you with low-status associations. The point is that there is a whole universe of madness out there. Charity has its limits. Most hypotheses aren't even worth the charity of being mentioned. I can't understand why you're more interested in the discourse of theism than in the discourse of astrology, unless it's because (e.g.) Christianity remains a more prestigious belief system in our current general society than astrology. And if that's the case, you're totally using the wrong heuristic to find interesting and important ideas that have a chance of being true or useful.
To find the correct contrarian cluster, start with the correct cluster.
A word of caution to those who would dispute this: keep in mind the difference between primary and secondary sources.
As an example, the first is Mein Kampf if read to learn about what caused WWII, the second is Mein Kampf if read to learn about the history of the Aryan people. The distinction is important if someone asks whether or not reading it "has value".
That said, zero is a suspiciously low number.
I disagree. There is plenty of useful information in there despite the bullshit. Extracting it is simply inefficient since there are better sources.
See the paragraph immediately following the one you quoted.
This sounds like a nitpick but I think it's actually very central to the discussion: things that are not even wrong can't be wrong. (That's not obviously true; elsewhere in this thread I talk about coding theory and Kraft's inequality and heuristics and biases and stuff as making the question very contentious, but the main idea is not obviously wrong.) Thus much of spirituality and theology can't be wrong. (And we do go around using monadology, it's just called computationalism and it's a very common meme around LW, and we do go around at least debating theodicy, see Eliezer's Fun Theory sequence and "Beyond the Reach of God".)
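(For readers unfamiliar with the coding-theory point gestured at above: Kraft's inequality says a binary prefix-free code with codeword lengths l_1..l_n must satisfy sum(2^-l_i) <= 1 - short names are a scarce resource. A minimal sketch; the example codes are hypothetical:)

```python
# Kraft's inequality: any binary prefix-free code with codeword lengths
# l_1..l_n satisfies sum(2^-l_i) <= 1. Intuitively, you cannot give
# everything a short name.

def kraft_sum(lengths):
    """Sum of 2^-l over the given codeword lengths."""
    return sum(2.0 ** -l for l in lengths)

def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

# A valid prefix-free code with lengths 1, 2, 3, 3: the Kraft sum is
# exactly 1.0, i.e. a complete code.
code = ["0", "10", "110", "111"]
assert is_prefix_free(code)
assert kraft_sum(len(w) for w in code) <= 1.0

# Lengths 1, 1, 2 give a Kraft sum of 1.25 > 1, so no prefix-free
# binary code with those lengths can exist.
assert kraft_sum([1, 1, 2]) > 1.0
```

(The connection to the thread: if concepts compete for short descriptions, most of them necessarily get long, awkward ones.)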
Your slippery slope argument does not strike me as an actual contribution to the discussion. You have to show that the people and ideas I think are worthwhile are in the set of stupid-therefore-contemptible memes, not assume the conclusion.
Unfortunately, I doubt you or any of the rest of Less Wrong have actually looked at any of the ideas you're criticizing, or really know what they actually are, as I have been continually pointing out. Prove me wrong! Show me how an ontology can be incorrect, then show me how Leibniz's ontology was incorrect. Show me that it's absurd to describe the difference between humans and animals as humans having a soul where animals do not. Show me that it's absurd to call the convergent algorithm of superintelligence "God", if you don't already have the precise language needed to talk in terms of algorithmic probability theory. Better, show me how it would be possible for you to construct such an argument.
We are blessed in that we have the memes and tools to talk of such things with precision; if Leibniz were around today, he too would be making his arguments using algorithmic probability theory and talking about simulations by superintelligences. But throughout history and throughout memespace there is a dearth of technicality. That does not make the ideas expressed incorrect, it simply makes it harder to evaluate them. And if we don't have the time to evaluate them, we damn well shouldn't be holding those ideas in mocking contempt. We should know to be more meta than that.
One is correct and interesting, one is incorrect and uninteresting. And if you don't like that I am assuming the conclusion, you will see why I do not like it when others do the same.
There are two debates we could be having. One of them is about choice of language. Another is about who or what we should let ourselves have un-reflected upon contempt for. The former debate is non-obvious and like I said would involve a lot of consideration from a lot of technical fields, and anyway might be very person-dependent. The second is the one that I think is less interesting but more important. I despise the unreflected-upon contempt that the Less Wrong memeplex has for things it does not at all understand.
Aww! :( I managed to suspend my disbelief about astrology and psychic powers and applications to psychiatry, but then he called Rett's syndrome a severe (yet misspelled) form of autism and I burst out laughing.
My last discussion post was a result of trying to follow Steven's quote, and I managed to salvage an interesting argument from a theist. But it didn't look anything like what you're doing. In particular, many people were able to parse my argument and pick out the correct parts. Perhaps you could try to condense your attempts in a similar manner?
I thought I had, but you seem to have seen worse. Can I have a link? I also request a non-religious example of either an argument that can be reinforced, a response failing to do so, or a response succeeding in doing so.
Noted, thanks.
Let's just call truth "truth" and gibberish "gibberish".
This "another's ontology" thing is usually random nonsense when it sounds like that. Some of it reflects reality, but you probably have those bits yourself already, and the rest should just be cut off clean, perhaps with the head (as Nature is wont to do). Why is understanding "another's ontology" an interesting task? Understand reality instead.
Why not just ignore the apparently nonsensical, even if there is some hope of understanding its laws and fixing it incrementally? It's so much work for little benefit, and there are better alternatives. It's so much work that even understanding your own confusions, big and small, is a challenging task. It seems to me that (re)building from reliable foundation, where it's available, is much more efficient. And where it's not available, you go for the best available understanding, for its simplest aspects that have any chance of pointing to the truth, and keep them at arm's length, all pieces apart, lest they congeal into a bottomless bog of despair.
You yourself tend to make use of non-standard ontologies when talking about abstract concepts. I sometimes find it useful to reverse engineer your model so that I can at least understand what caused you to reply to someone's comment in the way that you did. This is an alternative to (or complements) just downvoting. It can potentially result in extracting an insight that is loosely related in thingspace as well as in general being a socially useful skill.
Note that I don't think this applies to what Will is doing here. This is just crazy talk.
Thank you for writing this, I now finally feel like I sort of understand what you've been going on about in recent months (though there are gaps too large for me to judge whether you are right). Please consider translating versions of your arguments refined and worked out enough that you would find comfortable defending.
Unless that would cause you to risk eternal damnation (^_^)
I wrote all that stuff back when I was still new to a lot of the ideas and hadn't really organized them well in my head. I also included a lot of needless metaphysical stuff just to troll people. The general argument for academic theism that I would make these days would be phrased entirely in terms of decision theory and theoretical computer science, and would look significantly more credible than my embarrassingly amateurish arguments from a year or so ago. Separately, I know of better soteriological arguments these days, but I don't take them as seriously and there's no obvious way to make them look credible to LessWrong. If I was setting forth my arguments for real then I would also take a lot more care to separate my theological and eschatological arguments.
Anyway, I'd like to set forth those arguments at some point, but I'm not sure when. I'm afraid that if I put forth an argument then it will be assumed that I supported that argument to the best of my ability, when in reality writing for a diverse audience stresses me out way too much for me to put sustained effort into writing out justifications for any argument.
The "least restrictive, obviously acceptable thing" might be to collect a list of prerequisites in decision theory and CT that would be necessary to understand the main argument. You made a list of this kind (though for different purposes) several years ago, but I still haven't been able to trace from then how you ended up here.
One question.
If all of this was wrong, if there were no Forms other than in our minds and there was no convergence onto a central superoptimizer - would you say our universe was impossible? What difference in experience that we could perceive today disproves your view?
To a large extent my comment was a trap: I deliberately made the majority of my claims purely metaphysical so that when someone came along and said "you're wrong!" I could justifiably claim "I didn't even make any factual claims". You've managed to avoid my trap.
yay!
This read vaguely like it could possibly be interpreted in a non-crazy way if you really tried... until the stuff about Jesus.
I mean, whereas the rest of the religious terminology could plausibly be metaphorical or technical, it actually looks as if you're non-metaphorically saying that Jesus died so we could have a positive singularity.
Please tell me that's not really what you're saying. I would hate to see you go crazy for real. You're one of my favorite posters even if I almost always downvote your posts.
Nah, that's what I was actually saying. Half-jokingly and half-trollingly, but that was indeed what I was saying. And in case it wasn't totally freakin' obvious, I'm trying to steel man Christianity, not describe my own beliefs. I'm, like, crazy in correct ways, not stupid arbitrary ways. Ahem. Heaven wasn't that technical a concept really, just "the output of the acausal economy"---though see my point about "acausal economy" perhaps being a misleading name, it's just an easy way to describe the result of a lot of multiverse-wide acausal "trade". "Apocalypse" is more technical insofar as we can define the idea of a hard takeoff technological singularity, which I'm pretty sure can be done even if we normally stick to qualitative descriptions. (Though qualitative descriptions can be technical of course.) "God" is in some ways more technical but also hard to characterize without risking looking stupid. There's a whole branch of theology called negative theology that only describes God in terms of what He is not. Sounds like a much safer bet to me, but I'm not much of a theologian myself.
Thanks. :) Downvotes don't faze me (though I try to heed them most of the time), but falling on deaf ears kills my motivation. At the very least I hope my comments are a little interesting.
I personally experience a very mild bad feeling for certain posts that do not receive votes, a bad feeling for only downvoted posts, and a good feeling for upvoted posts almost regardless of the number of downvotes I get (within the quantities I have experienced).
I can honestly say it doesn't bother me to be downvoted many times in a post so long as the post got a few upvotes; one might be too few against twenty, but a goodly number like five would probably suffice against a hundred, or twenty against a thousand. Asch conformity.
Being downvoted many times doesn't feel at all worse than being downvoted a few times, and it's because of more than scope insensitivity: a post with few downvotes is the cousin of one with no votes at all, the other type of negative.
This is not the type of post that would bother me if it went without votes, as it is merely an expression of my opinion.
Consequently, I wish the number of up and down votes were shown, rather than the sum.
I'm probably reading too much into this, but it reminds me of myself. Desperately wanting to go "woo" at something... forcing reality into religious thinking so you can go "woo" at it... muddying thought and language, so that you don't have to notice that the links you draw between things are forced... never stepping back and looking at the causal links... does this poem make you feel religious?
If my diagnosis is correct: stop it! Go into cold analysis mode, and check your model like you're a theorem prover - you don't get to point to feelings of certainty or understanding the universe. It's going to hurt. If your model is actually wrong, it's going to hurt a lot.
And then, once you've swallowed all the bitter pills - there are beautiful things but no core of beauty anywhere, formal systems but no core of eternity, insights but no deep link to all things - and you start looking back and saying "What the hell was I thinking?" - why, then you notice that your fascination and your feelings of understanding and connection and awe were real and beautiful and precious, and that since you still want to go "woo" you can go "woo" at that.
I think perhaps you underestimate the extent to which I really meant it when I said "just for fun". This isn't how I do reasoning when I'm thinking about Friendliness-like problems. When I'm doing that I have the Goedel machine paper sitting in front of me, 15 Wikipedia articles on program semantics open, and I'm trying my hardest to be as precise as possible. That is a different skillset. The skill that I (poorly) attempted to demonstrate is a different one, that of steel-manning another's epistemic position in order to engage with it in meaningful ways, as opposed to assuming that the other's thinking is fuzzy simply because their language is one you're not used to. But I never use that style of syncretic "reasoning" when I'm actually, ya know, thinking. No worries!
But... I said I was used to it, and remembering it being fuzzy!
Compartmentalization is wonderful.
I don't mean to be insulting. I know you're smart. I know you're a good reasoner. I only worry that you might not be using your 1337 reasoning skillz everywhere, which can be extremely bad because wrong beliefs can take root in the areas labeled "separate magisterium" and I've been bitten by it.
(I'm not sure we understand each other, but...) Okay. But I mean, when Leibniz talks about theistic concepts, his reasoning is not very fuzzy. Insofar as smart theists use memes descended from Leibniz---which they do, and they also use memes descended from other very smart people---it becomes necessary that I am able to translate their concepts into concepts that I can understand and use my normal rationality skillz on.
I don't think this is compartmentalization. Compartmentalization as I understand it is when you have two contradictory pieces of information about the world and you keep them separate for whatever reason. I'm talking about two different skills. My actual beliefs stay roughly constant no matter what ontology/language I use to express them. Think of it like Solomonoff induction. The universal machine you choose only changes things by at most a constant. (Admittedly, for humans that constant can be the difference between seeing or not seeing a one step implication, but such matters are tricky and would need their own post. But imagine if I was to try to learn category theory in Russian using a Russian-English dictionary.) And anyway I don't actually think in terms of theism except for when I either want to troll people, understand philosophers, or play around in others' ontologies for kicks.
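(The invariance result this appeals to can be stated precisely. To be clear, the formal statement below is a standard theorem of algorithmic information theory, not something from the original comment: for any two universal prefix machines U and V, there is a constant depending only on the pair of machines, not on the string being described, bounding the difference in description length.)

```latex
% Invariance theorem of Kolmogorov complexity:
% for universal prefix machines U and V there exists a constant
% c_{U,V}, depending only on U and V (not on x), such that
\forall x:\quad K_U(x) \;\le\; K_V(x) + c_{U,V}
```

So switching the reference machine, or by the comment's analogy the ontology/language one thinks in, shifts description lengths by at most a machine-dependent constant.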
I am not yet convinced that it isn't misplaced, but I do thank you for your concern.
Very interesting! Thanks. I have a few questions and requests.
What other characteristics do you think God has?
I expected you to say that God was the Carnot engine, not the soul. In terms of perfection I'm guessing you are thinking mind < soul < God, with mind being an approximation of soul, which is an approximation of God. Is that right?
This strikes me as very interesting, and highlights the confusion I have relating the timeful and the timeless perspectives. When do you reason in terms of one instead of the other?
What do you think the chances are that the above describes reality better than the OP implicitly does?
Can you quantify that? Approximately how many people are we talking about here? A thousand? A million? A billion?
I mean it depends a lot on what we mean by "smart people". I'm thinking of theists like a bright philosophy student on the dumber end of smart, C. S. Lewis in the middle, and geniuses like Leibniz on the smarter end. People whose ideas might actually be worth engaging with. E.g. if your friend or someone at a party is a bright philosophy student, it might be worth engaging with them, or if you have some free time it might be a good idea to check out the ideas of some smart Christians like C. S. Lewis, and everyone in the world should take the time to check out the genius of Leibniz considering he was a theist and also the father of computer science. Their ideas are often decently sophisticated, not just something that can be described and discarded as "ontologically fundamental mental states", and it's worth translating their ideas into a decent language where you can understand them a little better. And if it happens to give you okay ideas while doing so, all the better, but that's not really the point.
Can you please explain a bit more what the point is? I'm having trouble figuring out why I would want to try to understand something, if not to get "okay" ideas.
There are many, but unfortunately I only have enough motivation to list a few:
Let me rephrase my question. You decided, on this particular occasion, taking into account opportunity costs, that it was worth trying to understand somebody, for a reason other than to get "okay" ideas. What was that reason?
You mean my original "let's talk about Jesus!" comment? I think I bolded the answer in my original comment: having fun. (If I'd known LW was going to interpret what I wrote as somehow representative of my beliefs then I wouldn't have written it. But I figured it'd just get downvoted to -5 with little controversy, like most of my previous similar posts were.)
Why is it fun? (That is, can you take a guess at why your brain's decided it should be fun? This way of posing the question was also the primary intended meaning for my assertion about countersignaling, although it assumed more introspective access. You gave what looked like an excuse/justification on how in addition to being fun it's also an exercise of a valuable skill, which is a sign of not knowing why you really do stuff.)
Bleh, I think there may be too much equivocation going on, even though your comment is basically correct. My original "insane" comment is not representative of my comments, nor is it a good example of the skill of charitable interpretation.
When I give justifications they do tend to be pretty related to the causes of my actions, though often in weird double-negative ways. Sometimes I do something because I am afraid of the consequences of doing something, in a self-defeating manner. I think a lot of my trying to appear discreditable is a defense mechanism put up because I am afraid of what would happen if I let myself flinch away from the prospect of appearing discreditable; like, afraid of the typical default failure mode where people get an identity as someone who is "reasonable" and then stop signalling and thus stop thinking thoughts that are "unreasonable", where "reason" is only a very loose correlate of sanity. My favorite LW article ever is "Cached Selves", and that has been true for two years now. Also one of my closest friends co-wrote that article, and his thinking has had a huge effect on mine.
I think saying it was "fun" is actually the rationalization, and I knew it was a rationalization, and so I was lying. It's a lot more complex than that. I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community. (/does more reflection to make sure I'm not making things up.) Okay. Also, relatedly, part of it was wanting to signal insanity for the reasons outlined above, or reasons similar to the ones outlined above in the sense of being afraid of some consequence of not doing something that I feel is principled, or something that I feel would make me a bad person if I didn't attempt it. Part of it was wanting to signal something like cleverness, which is maybe where some of the "fun" happens to be, though I can only have so much fun when I'm forced to type very quickly. Part of it was trolling for its own sake on top of the aforementioned anti-anti-virtuous rationale, though the motivation for "trolling for its own sake" might come from the same place as that anti-anti-virtuous rationale, stemming from a more fundamental principle. I would be suspicious if any of these reasons claimed to be the real reason. Actions tend to follow many reasons in conjunction. (/avoids going off on a tangent about the principle of sufficient reason and Leibniz's theodicy for irony's sake.)
It's interesting because others seem to be much more attached to certain kinds of language than I am, and so when they model me they model me as being unhealthily attached to the language of religion or spirituality or something for its own sake, and think that this is dangerous. I think this may be at least partially typical mind fallacy. I am interested in these languages because I like trolling people (and I like trolling people for many reasons, as outlined above), but I personally much prefer the language of algorithmic probability and generally computationalism, which can actually be used precisely to talk about well-defined things. I only talk in terms of theism when I'm upset at people for being contemptuous of theism. Again there are many reasons for these things, often at different levels of abstraction, and it's all mashed together.
Who is "we"? It's your claim. Tell me what you mean, or I will think you are equivocating: in one sense, at least hundreds of millions of believers are smart; in another (those within the top 1% of the top 1% of the top 1% of humans), only a handful may qualify, and the majority of those might mean something like what you said.
Your philosophy has just been downchecked in my mind. I read much of his stuff before I could have been biased against him for being Christian; even the Screwtape Letters would have been a worthwhile exercise for an atheist writer, and I didn't know he was Christian when I read even those.
The number of parts you have to add to a perpetual motion machine to hide from yourself the fact that it doesn't work is proportional to your intelligence.
The following sentences are meant to be maximally informative given that I am unwilling to put in the necessary effort to actually respond. I apologize that I am unwilling to actually respond.
The general skill that I think is important is the skill you're failing to demonstrate in your comment. It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker. My suggestion is to just use that skill more often, for your sake and my sake and for the sake of group epistemology at all levels of organization. Just charity.
I have a confident model that you are a better thinker than posts like these suggest. But as Wei Dai says, that's not enough: I don't want to see posts that are unpleasant to read (not only for the cryptic obscurity, but also for excessive length and lack of paragraphing), don't have enough valuable content to justify wading through, and turn people off of Less Wrong. Worse, since I know you can do better, these flaws feel like intentional defection with respect to Less Wrong norms of clarity in communication.
In order to be perceived as being a careful thinker by others, you have to send credible signals of being a careful thinker, and avoid sending any contrary signals. You've failed to do so on several recent occasions. How come you don't consider that to be a very important skill?
Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don't think are careful thinkers? You gave a number of reasons why people might want to do that, but as you admitted, the analysis omits opportunity costs.
Think about it this way: everything you write on LW will probably be read by at least 20 people, and many more for posts. Why should 20+ people spend the effort of deciphering your cryptic thoughts, when you could do it ahead of time or upon request but implicitly or explicitly decide not to? Just for practice? What about those who don't think this particular occasion is the best one for such practice? Notice that this applies even when you are already perceived as a careful thinker. If you're not, then they have even less reason to spend all that effort.
Not in general, no. It's pretty context-sensitive. I think they should do so on Less Wrong, where we should aim to have exceptionally high standards of group epistemology. I do think that applies doubly for folk like me who have a decent chunk of karma and have spent a lot of time with a lot of very smart people, but I am not sure how many such people contribute to LW, so it's probably not a worthwhile norm to promote. If LW were somewhat saner perhaps they would, though, so it's unclear.
I am a significantly better rationalist than the LW average, and I'm on the verge of leaving; that says a whole bunch about my lack of ability to communicate, but also some non-negligible amount about LW's ability to understand humans who don't want to engage in the negative-sum signalling game of kowtowing to largely-unreflected-upon local norms. (I'm kind of ranting here and maybe even trolling slightly; it's very possible that my evaluations aren't themselves stable under reflection. (But at least I can recognize that...))
Right, so your comment unfortunately assumes something incorrect about my psychology, i.e. that it is motivationally possible for me to make my contributions to LW clearer. I once put a passive-aggressive apology at the bottom of one of my comments; perhaps if I continue to contribute to LW I'll clean it up and put it at the bottom of every comment.
Point being, this isn't the should world, and I do not have the necessary energy (or writing skills) to pull an Eliezer and communicate across years' worth of inferential distance. Other humans who could teach what I would teach are busy saving the world, as I try to be. That said, I'm 19 years old and am learning skills at a pretty fast rate. A few years from now I'll definitely have a solid grasp of a lot of the technical knowledge that I currently only informally (if mildly skillfully despite that) know how to play with, and I will also have put a lot more effort into learning to write (or learning to bother to want to communicate effectively). If the rationalist community hasn't entirely disintegrated by then, then perhaps I'll be able to actually explain things for once. That'd be nice.
Back to the question: I consider signalling credibility to be an important skill. I also try to be principled. If I did have the necessary motivation I would probably just pull an Eliezer and painstakingly explain every little detail with its own 15-paragraph post. But there is also some chance that I would just say "I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them". But not if I'd spent a lot of time really hammering into my head that this isn't the should world, or if I learned to truly empathize with the psychology of the kind of human that thinks that way, which is pretty much every human ever.
(Not having done these things might be the source of my inability to feel motivated to explain things. Despair at how everyone including LW is batshit insane and because of that everyone I love is going to die, maybe? And there's nothing I can do to change that? That sounds vaguely plausible. Hard to motivate oneself in that kind of situation, hard to expect that anything can actually have a substantial impact. Generalized frustration. I just have to remember: this isn't the should world, it is only delusion that would cause me to expect anything else but this, people do what they have incentive and affordance to do, there is no such thing as magical free will, I am surely contemptible in a thousand similar ways, I implicitly endorse a thousand negative-sum games because I've implicitly chosen to not reflect on whether or not they're justified, if anyone can be seen as evil then surely I can, because I actually do have the necessary knowledge to do better, if I am to optimize anyone I may as well start with myself... ad nauseam.)
There's some counterfactual world where I could have written this comment so as to be in less violation of local norms of epistemology and communication, and it is expected of me that I acknowledge that a tradeoff has been made which keeps this world from looking like that slightly-more-optimized world, and feel sorry about that necessity, or something, so I do. I consequently apologize.
One of the ways we do this is by telling people when they are writing things that are batshit insane. Because you were. It wasn't deep. It was obfuscated, scattered, and generally poor-quality thought. You may happen to be personally awesome. Your recent comments, however, sucked. Not "were truly enlightened but the readers were not able to appreciate it". They just sucked.
Sorry, which comments sucked? The majority of my recent comments have been upvoted, and very few were particularly obfuscated. I had one post that was largely intended to troll people and another comment that was intended to be for the lulz and which I obviously don't think people should be mining for gold. (Which is why I said many times in the comment that it was poor quality syncretism and also bolded that it was just for fun.)
(Tangential: Is "batshit insane" Nesov's vocabulary? It's been mine for awhile.)
I don't think it's possible to understand what you are trying to say; even assuming there is indeed something to understand, you don't give enough information to arrive at a clear interpretation. It's not a matter of unwillingness. And the hypothesis that someone is insane (at least in one compartment) is more plausible than that they are systematically unable/unwilling to communicate clearly insights of unreachable depth, and so only leave cryptic remarks indistinguishable from those generated by the insane. (This remains a possibility, but needs evidence to become more than that. Hindsight or private knowledge don't justify demanding prior beliefs that overly favor the truth.)
There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing. I have a hypothesis which may just be wrong that people who are particularly good thinkers would notice that I wasn't just insane-in-a-relevant-way and be able to fill in the gaps that would let them understand what I am saying. I have this hypothesis because I think that I have that skill to a large extent, as I believe do others like Michael Vassar or Peter de Blanc or Steve Rayhawk or generally people who bother to train that skill.
I notice that some people who I think are good thinkers, such as yourself, seem to have a low overall estimate of the worthwhileness of my words. However I have accumulated a fair amount of evidence that you do not have the skill of reading (or choose not to exercise the skill of reading), that is, that you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction. If you had to choose a side to be biased towards then that would of course be the correct one, but it isn't clear that such a choice is necessary to be a strong rationalist, as I think is evidenced by Steve Rayhawk, Peter de Blanc, and Michael Vassar (three major influences on my thinking, in descending order of influence.) Thus I do not consider your low estimate of my rationality to be overwhelming evidence that it is in fact impossible to understand what I am trying to say even without sharing much background knowledge with me. I suspect that e.g. Wei Dai has a lowish estimate of my rationality w.r.t. things he is interested in; my model of Wei Dai has him as less curious than you are about things that I yammer about, so my wild guess at his thoughts on the matter are particularly little evidence compared to your thoughts. I plan on getting more information about this in time.
I am not convinced that this qualifies as "an at all decent description of what the vast majority of smart people mean when they talk about souls."
The point of saying something like "mental states are not ontologically fundamental" is: you are a brain. Your consciousness, your self, requires your brain (or maybe something that emulates it or its functions) to exist. That is what all the evidence tells us.
Yes, I realize that responding to a neo-Platonistic description by talking about the evidence doesn't seem like the most relevant course. But here's the thing: in this universe, the one in which we exist, the "Form" that our minds take is one that includes brains. No brains, no minds, as far as we can tell. Maybe there's some universe where minds can exist without brains - but we don't have any reason to believe that it's ours.
I don't think you're responding to anything I wrote. Nothing you're saying conflicts with anything I said, except you totally misused the word "Form", though you did that on purpose so I guess it's okay. On a different note, I apologize for turning our little discussion into this whole other distracting thing. If you look at my other comments in this thread you'll see some of the points of my actual original argument. My apologies for the big tangent.
I was really responding to what you failed to write, i.e. a relevant response to my comment. The point is that it doesn't matter if you use the words "eternal soul," "ontologically basic mental state," or "minds are Forms"; none of those ideas matches up with reality. The position most strongly supported by the evidence is that minds, mental states, etc. are produced by physical brains interacting with physical phenomena. We dismiss those other ideas because they're unsupported and holding them prevents the realization that you are a brain, and the universe is physical.
It seems like you're arguing that we ought to take ideas seriously simply because people believe them. The fact of someone's belief in an idea is only weak Bayesian evidence by itself, though. What has more weight is why ey believes it, and the empirical evidence just doesn't back up any concept of souls.
Well shucks, why not? If you're going to complain about people not subscribing to your idea of what "soul" should mean (a controversial topic, especially here), I would hope you'd be open to debate. If you only post something even you admit is not strong, why would an opponent bother trying to debate about it? That is - I may be misinterpreting you, but when I say "this was quick; I'm not gonna try to defend it very hard" it means "even if you refute this I won't change my mind, 'cause it isn't my real position."
I think folk perhaps didn't realize that I really meant "just for fun". I didn't realize that Dreaded_Anomaly and I had entered into a more serious type of conversation, and I surely didn't mean to disrespect our discussion by going off on a zany tangent. Unfortunately, though, I think I may have done that. If I was actually going to have a discussion with folk about the importance of not having contempt for ideas that we only fuzzily understand, it would take place in a decent-quality post and not in a joking comment thread. My apologies for betraying social norms.
I didn't! My apologies for taking it so seriously, then.
Is there some way that I could have indicated this better? I asked a clarifying question about your post, and spoke against the topic in question being solely a matter of local norms. What seemed less than serious?