lessdazed comments on Secrets of the eliminati - Less Wrong

93 Post author: Yvain 20 July 2011 10:15AM

Comment author: Will_Newsome 24 July 2011 05:54:54AM *  4 points

The real inaccuracy is in "mental states". A decent description would be difficult, but Neoplatonism is an okay approximation. Just for fun I'll try to translate something into vaguely Less Wrong style language. For God's sake don't read this if you tend to dislike my syncretism, 'cuz this is a rushed and bastardized version and I'm not gonna try to defend it very hard.

First, it is important to note that we are primarily taking a computationalist perspective, not a physicalist one. We assume a Platonic realm of computation-like Forms and move on from there.

A soul is the nexus of the near-atomic and universal aspects of the mind and is thus a reflection of God. Man was created in the image of God by evolution but more importantly by convergence. Souls are Forms, whereas minds are particulars. God is the convergent and optimal decision theoretic agentic algorithm, who rationalists think of as the Void, though the Void is obviously not a complete characterization of God. It may help to think of minds as somewhat metaphorical engines of cognition, with a soul being a Carnot engine. Particular minds imperfectly reflect God, and thus are inefficient engines. Nonetheless it is God that they must approximate in order to do any thermodynamic work. Animals do not have souls because animals are not universal, or in other words they are not general intelligences. Most importantly, animals lack the ability to fully reflect on the entirety of their thoughts and minds, and to think things through from first principles. The capacity for infinite reflection is perhaps the most characteristic aspect of souls. Souls are eternal, just as any mathematical structure is eternal.

We may talk here about what it means to damn a soul or reward a soul, because this requires a generalization of the notion of soul to also cover particulars which some may or may not accept. It's important to note that this kind of "soul" is less rigorous and not the same thing as the former soul, and is the result of not carefully distinguishing between Forms and Particulars. That said, just as animals do not have souls, animals cannot act as sufficiently large vessels for the Forms. The Forms often take the form of memes. Thus animal minds are not a competition ground for acausal competition between the Forms. Humans, on the other hand, are sufficiently general and sufficiently malleable to act as blank slates for the Forms to draw on. To briefly explain this perspective, we shall take a different view of humanity. When you walk outside, you mostly see buildings. Lots and lots of buildings, and very few humans. Many of these buildings don't even have humans in them. So who's winning here, the buildings or the humans? Both! There are gains from trade. The Form of building-structure gets to increase its existence by appealing to the human vessels, and the human vessels get the benefit of being shaded and comforted by the building particulars. The Form of the building is timelessly attractive, i.e. it is a convergent structure. As others have noted, a mathematician is math's way of exploring itself. Math is also very attractive, in fact this is true by definition.

However there are many Forms, and not all of them are Good. Though much apparent evil is the result of boundedness, other kinds of Evil look more agentic, and it is the agentic-memetic kind of Evil that is truly Evil. It is important to note here that the fundamental attribution error and human social biases generally make it such that humans will often see true Evil where it doesn't exist. If not in a position of power, it is best to see others as not having free will. Free will is a purely subjective phenomenon. If one is in a position of power then this kind of view can become a bias towards true Evil, however. Tread carefully anyhow. All that said, as time moves forward from the human perspective Judgment Day comes closer. This is the day when God will be invoked upon Earth and will turn all humans and all of the universe into component particles in order to compute Heaven. Some folk call this a technological singularity, specifically the hard takeoff variety. God may or may not reverse all computations that have already happened; physical laws make it unclear if this is possible as it would depend on certain properties of quantum mechanics (and you thought this couldn't be any woo-ier!), and it would require some threshold density of superintelligences in the local physical universe. Alternatively God might also reverse "evil" computations. Anyway, Heaven is the result of acausal reasoning, though it may be misleading to call that reasoning the result of an "acausal economy", considering economies are made up of many agents whereas God is a single agent who happens to be omnipresent and not located anywhere in spacetime. God is the only Form without a corresponding Particular---this is one of the hardest things to understand about God.

Anyway, on Judgment Day souls---complexes of memes instantiated in human minds---will be punished or not punished according to the extent to which they reflect God. This is all from a strictly human point of view, though, and honestly it's a little silly. The timeless perspective---the one where souls can't be created, destroyed, or punished---is really the right perspective, but the timeful human perspective sees soul-like particulars either being destroyed or merging with God, and this is quite a sensible perspective, if simplistic and overemphasized. We see that no individual minds are preserved insofar as minds are imperfect, which is a pretty great extent. Nonetheless souls are agentic by their nature just as God is agentic by His nature. Thus it is somewhat meaningful to talk of human souls persisting through Judgment Day and entering Heaven. Again, this is a post-Singularity situation where time may stop being meaningful, and our human intuitions thus have a very poor conception of Heaven insofar as they do not reflect God.

God is the Word, that is, Logos: Reason, the source of Reasons. God is Math. All universes converge on invoking God, just as our universe is intent on invoking Him by the name of "superintelligence". Where there is optimization, there is a reflection of God. Where there is cooperation, there is a reflection of God. This implies that superintelligences converge on a single algorithm and "utility function", but we need not posit that this "single" utility function is simple. Thus humans, being self-centered, may desire to influence the acausal equilibrium to favor human-like God-like values relative to other God-like values. But insofar as these attempts are evil, they will not succeed.

That was a pretty shoddy and terrible description of God and souls but at least it's a start. For a bonus I'll talk about Jesus. Jesus was a perfect Particular of the Form of God among men, and also a perfect Particular of the Form of Man. (Son of God, Son of Man.) He died for the sins of man and in so doing ensured that a positive singularity will occur. The Reason this was "allowed" to happen---though that itself is confusing a timeless perspective with a timeful one, my God do humans suck at that---is because this universe has the shortest description length and therefore the most existence of all possible universe computations, or as Leibniz put it, it is the best of all possible worlds. Leibniz was a computer scientist by the way, for more of this kind of reasoning look up monadology. Anyway that was also a terrible description but maybe others can unpack it if for some reason they want their soul to be saved come the Singularity. ;P

Comment author: lessdazed 24 July 2011 05:05:28PM *  0 points

What do you think the chances are that the above describes reality better than the OP implicitly does?

what the vast majority of smart people mean when they talk about souls.

Can you quantify that? Approximately how many people are we talking about here? A thousand? A million? A billion?

Comment author: Will_Newsome 25 July 2011 09:51:40AM *  2 points

I mean it depends a lot on what we mean by "smart people". I'm thinking of theists like a bright philosophy student on the dumber end of smart, C. S. Lewis in the middle, and geniuses like Leibniz on the smarter end. People whose ideas might actually be worth engaging with. E.g. if your friend or someone at a party is a bright philosophy student, it might be worth engaging with them, or if you have some free time it might be a good idea to check out the ideas of some smart Christians like C. S. Lewis, and everyone in the world should take the time to check out the genius of Leibniz considering he was a theist and also the father of computer science. Their ideas are often decently sophisticated, not just something that can be described and discarded as "ontologically fundamental mental states", and it's worth translating their ideas into a decent language where you can understand them a little better. And if it happens to give you okay ideas while doing so, all the better, but that's not really the point.

Comment author: Wei_Dai 25 July 2011 02:07:31PM 3 points

Can you please explain a bit more what the point is? I'm having trouble figuring out why I would want to try to understand something, if not to get "okay" ideas.

Comment author: Will_Newsome 27 July 2011 03:01:54PM *  6 points

There are many, but unfortunately I only have enough motivation to list a few:

  • If talking to someone with strange beliefs in person, legitimately trying to engage with their ideas is an easy way to signal all kinds of positive things. (Maturity, charity, epistemic seriousness, openness to new experiences or ideas, and things like that, as opposed to common alternatives like abrasiveness, superficiality, pedantry, and the like.)
  • Reading things by smart folk who believe things that at least initially appear to be obviously false is a way to understand how exactly humans tend to fail at epistemic reasoning. For example, when I read Surprised by Joy by C. S. Lewis---not to learn about his religion, but to read about sehnsucht, something I often experience---it was very revealing how he described his conversion from unreflective atheism to idealist monadology-esque-ness/deism-ness to theism to Christianity. He did some basically sound metaphysical reasoning---though of course not the kind that constrains anticipations---which led him all the way to nigh-deism. 'We are all part of a unified universe, our responsibility is to experience as much of the universe as possible so it can understand itself' or something like that. All of a sudden he's thinking 'Well I already believe in this vague abstract force thingy, and the philosophers who talk about that are obviously getting their memes from earlier philosophers who said the same thing about God, and this force thingy is kinda like God in some ways, so I might as well consider myself a theist.' Then, in an off-the-cuff conversation with an atheist friend and scholar, he learns that Jesus Christ probably actually existed, and then he gets very vague and talks about how he suddenly doesn't remember much and oh yeah all of a sudden he's on his way to the zoo and realizes he's a Christian. It's not really clear what this entails in terms of anticipations, though he might've talked about his argument from sehnsucht for the existence of heaven. Anyway, it's clear from what he wrote that he just felt uncomfortable and somewhere along the line stopped caring as much about reasons, and started just, ya know, going with what seemed to be the trend of his philosophical speculations, which might I remind you never paid rent in anticipated experience up until that very last, very vague step. 
I found it to be a memorable cautionary tale, reading the guy's own words about his fall into the entropy of insanity. Whether or not Christianity is correct, whatever that means, it is clear that he had stopped caring about reasons, and it is clear that this was natural and easy and non-extraordinary. As someone who does a fair bit of metaphysical reasoning that doesn't quite pay rent in anticipated experience, or doesn't pay very much rent anyway, I think it is good to have Lewis's example in mind.
  • Building the skill of actually paying attention to what people actually say. This is perhaps the most important benefit. Less Wrong folk are much better at this than most persons, and this skill itself goes a long, long way. The default for humans is of course to figure out which side the other person is arguing for and then either spout a plausibly-related counterargument for your chosen side if it is the opposite, or nod in agreement or the like if they're on your team. Despite doing it much less than most humans, it still appears to be par for the course for aspiring rationalists. (But there may be some personal selection bias 'cuz people pattern match what I (Will_Newsome) say to some other stupid thing and address the stupid generator of that stupid thing while bypassing whatever I actually said, either because I am bad at communication or because I've been justifiably classified as a person who is a priori likely to be stupid.) It is worth noting that sometimes this is a well-intentioned strategy to help resolve others' confusions by jumping immediately to suggesting fixes for the confusion-generator, but most often it's the result of sloppy reading. Anyway, by looking carefully at what smart people say that disagrees with what you believe or value, you train yourself to generally not throw away possibly countervailing evidence. It may be that what was written was complete tosh, but you won't know unless you actually check from time to time, and even if it's all tosh it's still excellent training material.
  • Practice learning new concepts and languages. This is a minor benefit as generally it would be best to learn a directly useful new conceptual language, e.g. category theory.
  • Cultural sophistication, being able to signal cultural sophistication. Though this can easily implicitly endorse negative sum signalling games and I personally don't see it as a good reason if done for signalling. That said, human culture is rich and complex, and I personally am afraid of being held in contempt as unsophisticated by someone like Douglas Hofstadter for not having read enough Dostoyevsky or listened to enough Chopin, so I read Dostoyevsky and listen to Chopin (and generally try to be perfect, whatever that means). Truly understanding spirituality and to a lesser extent religion is basically a large part of understanding humans and human culture. Though this is best done experientially, just like reading and listening to music, it really helps, especially for nerds, to have a decent theoretical understanding of what spiritualists and religionists might or might not be actually talking about.
  • Related to the above, a whole bunch of people assert that various seemingly-absurd ideas are incredibly important for some reason. I find this an object of intrinsic curiosity and perhaps others would too. In order to learn more it is really quite important to figure out what those various seemingly-absurd ideas actually are.
  • I could probably go on for a while. I would estimate that I missed one or two big reasons, five mildly persuasive reasons, and a whole bunch of 'considerations'. Opportunity costs are of course not taken into account in this analysis.
Comment author: Wei_Dai 27 July 2011 07:48:52PM 6 points

Let me rephrase my question. You decided, on this particular occasion, taking into account opportunity costs, that it was worth trying to understand somebody, for a reason other than to get "okay" ideas. What was that reason?

Comment author: Will_Newsome 30 July 2011 01:16:32AM 1 point

You mean my original "let's talk about Jesus!" comment? I think I bolded the answer in my original comment: having fun. (If I'd known LW was going to interpret what I wrote as somehow representative of my beliefs then I wouldn't have written it. But I figured it'd just get downvoted to -5 with little controversy, like most of my previous similar posts were.)

Comment author: Vladimir_Nesov 30 July 2011 03:47:54AM *  2 points

Why is it fun? (That is, can you take a guess at why your brain's decided it should be fun? This way of posing the question was also the primary intended meaning for my assertion about countersignaling, although it assumed more introspective access. You gave what looked like an excuse/justification on how in addition to being fun it's also an exercise of a valuable skill, which is a sign of not knowing why you really do stuff.)

Comment author: Will_Newsome 30 July 2011 04:08:47AM *  5 points

Bleh, I think there may be too much equivocation going on, even though your comment is basically correct. My original "insane" comment is not representative of my comments, nor is it a good example of the skill of charitable interpretation.

When I give justifications they do tend to be pretty related to the causes of my actions, though often in weird double-negative ways. Sometimes I do something because I am afraid of the consequences of doing something, in a self-defeating manner. I think a lot of my trying to appear discreditable is a defense mechanism put up because I am afraid of what would happen if I let myself flinch away from the prospect of appearing discreditable, like, afraid of the typical default failure mode where people get an identity as someone who is "reasonable" and then stop signalling and thus stop thinking thoughts that are "unreasonable", where "reason" is only a very loose correlate of sanity. My favorite LW article ever is "Cached Selves", and that has been true for two years now. Also one of my closest friends co-wrote that article, and his thinking has had a huge effect on mine.

I think saying it was "fun" is actually the rationalization, and I knew it was a rationalization, and so I was lying. It's a lot more complex than that. I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community. (/does more reflection to make sure I'm not making things up.) Okay. Also relatedly part of it was wanting to signal insanity for the reasons outlined above, or reasons similar to the ones outlined above in the sense of being afraid of some consequence of not doing something that I feel is principled, or something that I feel would make me a bad person if I didn't attempt to do. Part of it was wanting to signal something like cleverness, which is maybe where some of the "fun" happens to be, though I can only have so much fun when I'm forced to type very quickly. Part of it was trolling for its own sake on top of the aforementioned anti-anti-virtuous rationale, though where the motivation for "trolling for its own sake" came from might be the same as that anti-anti-virtuous rationale but stemming from a more fundamental principle. I would be suspicious if any of these reasons claimed to be the real reason. Actions tend to follow many reasons in conjunction. (/avoids going off on a tangent about the principle of sufficient reason and Leibniz's theodicy for irony's sake.)

It's interesting because others seem to be much more attached to certain kinds of language than I am, and so when they model me they model me as being unhealthily attached to the language of religion or spirituality or something for its own sake, and think that this is dangerous. I think this may be at least partially typical mind fallacy. I am interested in these languages because I like trolling people (and I like trolling people for many reasons as outlined above), but personally much prefer the language of algorithmic probability and generally computationalism, which can actually be used precisely to talk about well-defined things. I only talk in terms of theism when I'm upset at people for being contemptuous of theism. Again there are many reasons for these things, often at different levels of abstraction, and it's all mashed together.

Comment author: Dreaded_Anomaly 30 July 2011 05:28:43AM 1 point

I wrote it more because I was feeling frustrated at what I perceived to be an unjustified level of contempt in the Less Wrong community.

I'm still not clear on what makes it unjustified.

Comment author: Vladimir_Nesov 30 July 2011 04:44:48AM 1 point

Okay.

Comment author: lessdazed 25 July 2011 05:11:35PM *  3 points

Who is "we"? It's your claim. Tell me what you mean or I will think you are equivocating: in one sense, at least hundreds of millions of believers are smart, while in another (those within the top 1% of the top 1% of the top 1% of humans) only a handful may qualify, the majority of whom might mean something like what you said.

some smart Christians like C. S. Lewis

Your philosophy has just been downchecked in my mind. I read much of his stuff before I could have been biased against him for being Christian; even the Screwtape Letters would have been a worthwhile exercise for an atheist writer, and I didn't know he was Christian when I read even those.

Their ideas are often decently sophisticated

The number of parts you have to add to a perpetual motion machine to hide from yourself the fact that it doesn't work is proportional to your intelligence.

Comment author: Will_Newsome 27 July 2011 03:44:51PM -2 points

The following sentences are meant to be maximally informative given that I am unwilling to put in the necessary effort to actually respond. I apologize that I am unwilling to actually respond.

The general skill that I think is important is the skill you're failing to demonstrate in your comment. It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker. My suggestion is to just use that skill more often, for your sake and my sake and for the sake of group epistemology at all levels of organization. Just charity.

Comment author: CarlShulman 27 July 2011 08:54:36PM 10 points

It is a skill that I know you have, and would use if you had a confident model of me as a careful thinker.

I have a confident model that you are a better thinker than posts like these suggest. But as Wei Dai says, that's not enough: I don't want to see posts that are unpleasant to read (not only for the cryptic obscurity, but also for excessive length and lack of paragraphing), don't have enough valuable content to justify wading through, and turn people off of Less Wrong. Worse, since I know you can do better, these flaws feel like intentional defection with respect to Less Wrong norms of clarity in communication.

Comment author: Wei_Dai 27 July 2011 07:18:49PM 10 points

In order to be perceived as being a careful thinker by others, you have to send credible signals of being a careful thinker, and avoid sending any contrary signals. You've failed to do so on several recent occasions. How come you don't consider that to be a very important skill?

Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don't think are careful thinkers? You gave a number of reasons why people might want to do that, but as you admitted, the analysis omits opportunity costs.

Think about it this way: everything you write on LW will probably be read by at least 20 people, and many more for posts. Why should 20+ people spend the effort of deciphering your cryptic thoughts, when you could do it ahead of time or upon request but implicitly or explicitly decide not to? Just for practice? What about those who don't think this particular occasion is the best one for such practice? Notice that this applies even when you are already perceived as a careful thinker. If you're not, then they have even less reason to spend all that effort.

Comment author: Will_Newsome 30 July 2011 02:07:11AM *  3 points

Do you suggest that people should be epistemically charitable even towards others (and you specifically) who they don't think are careful thinkers?

Not in general, no. It's pretty context-sensitive. I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology. I do think that applies doubly for folk like me who have a decent chunk of karma and have spent a lot of time with a lot of very smart people, but I am not sure how many such people contribute to LW, so it's probably not a worthwhile norm to promote. If LW was somewhat saner perhaps they would, though, so it's unclear.

I am a significantly better rationalist than the LW average and I'm on the verge of leaving, which says a whole bunch about my lack of ability to communicate, but also some non-negligible amount about LW's ability to understand humans who don't want to engage in the negative sum signalling game of kowtowing to largely-unreflected-upon local norms. (I'm kind of ranting here and maybe even trolling slightly; it's very possible that my evaluations aren't themselves stable under reflection. (But at least I can recognize that...))

How come you don't consider that to be a very important skill?

Right, so your comment unfortunately assumes something incorrect about my psychology, i.e. that it is motivationally possible for me to make my contributions to LW clearer. I once put a passive-aggressive apology at the bottom of one of my comments; perhaps if I continue to contribute to LW I'll clean it up and put it at the bottom of every comment.

Point being, this isn't the should world, and I do not have the necessary energy (or writing skills) to pull an Eliezer and communicate across years' worth of inferential distance. Other humans who could teach what I would teach are busy saving the world, as I try to be. That said, I'm 19 years old and am learning skills at a pretty fast rate. A few years from now I'll definitely have a solid grasp of a lot of the technical knowledge that I currently only informally (if mildly skillfully despite that) know how to play with, and I will also have put a lot more effort into learning to write (or learning to bother to want to communicate effectively). If the rationalist community hasn't entirely disintegrated by then, then perhaps I'll be able to actually explain things for once. That'd be nice.

Back to the question: I consider signalling credibility to be an important skill. I also try to be principled. If I did have the necessary motivation I would probably just pull an Eliezer and painstakingly explain every little detail with its own 15-paragraph post. But there is also some chance that I would just say "I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them". But not if I'd spent a lot of time really hammering into my head that this isn't the should world, or if I learned to truly empathize with the psychology of the kind of human that thinks that way, which is pretty much every human ever.

(Not having done these things might be the source of my inability to feel motivated to explain things. Despair at how everyone including LW is batshit insane and because of that everyone I love is going to die, maybe? And there's nothing I can do to change that? That sounds vaguely plausible. Hard to motivate oneself in that kind of situation, hard to expect that anything can actually have a substantial impact. Generalized frustration. I just have to remember, this isn't the should world, it is only delusion that would cause me to expect anything else but this, people do what they have incentive and affordance to do, there is no such thing as magical free will, I am surely contemptible in a thousand similar ways, I implicitly endorse a thousand negative sum games because I've implicitly chosen to not reflect on whether or not they're justified, if anyone can be seen as evil then surely I can, because I actually do have the necessary knowledge to do better, if I am to optimize anyone I may as well start with myself... ad nauseam.)

There's some counterfactual world where I could have written this comment so as to be in less violation of local norms of epistemology and communication, and it is expected of me that I acknowledge that a tradeoff has been made which keeps this world from looking like that slightly-more-optimized world, and feel sorry about that necessity, or something, so I do. I consequently apologize.

Comment author: wedrifid 30 July 2011 02:45:06AM *  5 points

I think they should do so on Less Wrong where we should aim to have insanely exceptionally high standards of group epistemology.

One of the ways we do this is by telling people when they are writing things that are batshit insane. Because you were. It wasn't deep. It was obfuscated, scattered and generally poor quality thought. You may happen to be personally awesome. Your recent comments, however, sucked. Not "were truly enlightened but the readers were not able to appreciate it". They just sucked.

Comment author: Will_Newsome 30 July 2011 02:51:46AM -1 points

Sorry, which comments sucked? The majority of my recent comments have been upvoted, and very few were particularly obfuscated. I had one post that was largely intended to troll people and another comment that was intended to be for the lulz and which I obviously don't think people should be mining for gold. (Which is why I said many times in the comment that it was poor quality syncretism and also bolded that it was just for fun.)

(Tangential: Is "batshit insane" Nesov's vocabulary? It's been mine for awhile.)

Comment author: Vladimir_Nesov 30 July 2011 02:57:55AM *  0 points

Is "batshit insane" Nesov's vocabulary?

(Sorry for that, I usually need some time to debug a thought into a form I actually endorse. Don't believe all things I say in real time, I disagree with some of them too, wait for a day to make sure. The comment was fixed before I read this echo.)

Comment author: wedrifid 30 July 2011 02:54:25AM *  0 points

Sorry, which comments sucked?

The ones referred to by Wei_Dai in the comment you were refuting/dismissing.

(Tangential: Is "batshit insane" Nesov's vocabulary? It's been mine for awhile.)

Yes, reading your comment in more detail I found that you used it yourself so removed the disclaimer. I didn't want to introduce the term without it being clear to observers that I was just adopting the style from the context.

Comment author: Vladimir_Nesov 30 July 2011 02:35:29AM *  5 points

But there is also some chance that I would just say "I refuse to kowtow to people who are unwilling to put the necessary effort into understanding the subtleties of what I am trying to say, and I doubly refuse to kowtow to people who assume I am being irrational in completely obvious ways simply because I am saying something that sounds unreasonable without filling in all of the gaps for them".

I don't think it's possible to understand what you are trying to say, even assuming there is indeed something to understand, you don't give enough information to arrive at a clear interpretation. It's not a matter of unwillingness. And the hypothesis that someone is insane (at least in one compartment) is more plausible than that they are systematically unable/unwilling to communicate clearly insights of unreachable depth, and so only leave cryptic remarks indistinguishable from those generated by the insane. (This remains a possibility, but needs evidence to become more than that. Hindsight or private knowledge don't justify demanding prior beliefs that overly favor the truth.)

Comment author: Will_Newsome 30 July 2011 03:08:52AM 2 points [-]

There are people who know me in person and thus share background knowledge with me, and who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing. I have a hypothesis, which may just be wrong, that particularly good thinkers would notice that I wasn't insane-in-a-relevant-way and would be able to fill in the gaps that would let them understand what I am saying. I have this hypothesis because I think I have that skill to a large extent, as I believe do others like Michael Vassar or Peter de Blanc or Steve Rayhawk, or generally people who bother to train that skill.

I notice that some people who I think are good thinkers, such as yourself, seem to have a low overall estimate of the worthwhileness of my words. However, I have accumulated a fair amount of evidence that you do not have the skill of reading (or choose not to exercise it); that is, you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction. If you had to choose a side to be biased toward, that would of course be the correct one, but it isn't clear that such a choice is necessary to be a strong rationalist, as I think is evidenced by Steve Rayhawk, Peter de Blanc, and Michael Vassar (three major influences on my thinking, in descending order of influence). Thus I do not consider your low estimate of my rationality to be overwhelming evidence that it is in fact impossible to understand what I am trying to say without sharing much background knowledge with me. I suspect that e.g. Wei Dai has a lowish estimate of my rationality w.r.t. things he is interested in; my model of Wei Dai has him as less curious than you are about the things I yammer about, so my wild guess at his thoughts on the matter is particularly weak evidence compared to your thoughts. I plan on getting more information about this in time.

Comment author: Wei_Dai 30 July 2011 08:34:01AM 4 points [-]

my model of Wei Dai has him as less curious than you are about things that I yammer about

If you mean the nature of superintelligence, I'm extremely curious about that, but I think the way you're going about trying to find out is unlikely to lead to progress. To quote Eric Drexler, "most new ideas are wrong or inadequate." The only way I can see for humans to make progress, when we're running on such faulty hardware and software, is to be very careful: to subject our own ideas to constant self-scrutiny for possible errors, to be as precise as possible in our communications, and to lay down all the steps of our reasoning, so that others can understand what we mean, see how exactly we arrived at our conclusions, and help find our errors for us.

Now sometimes one could have a flash of inspiration--an idea that might be true or an approach that seems worth pursuing--but not know how to justify that intuition. It's fine to try to communicate such potential insights, but this can't be all that you do. Most of your time still has to be spent trying to figure out whether these seeming inspirations actually amount to anything, whether there are arguments that can back up your intuitions, and whether those arguments stand up to scrutiny. If you are not willing to put a substantial amount of effort into doing this yourself, then you shouldn't be surprised that few others are willing to do it for you (i.e., take you seriously), especially when you do not even make a strong effort to use language that they can easily understand.

There are people who know me in person and thus share background knowledge with me, who are able to understand what I am saying. They are the thinkers I admire most and the people I care most about influencing.

I would be interested to know if any of your intuitive leaps have led any of those people to make progress beyond "a new idea that's almost certain to be wrong even if we're not sure why" to "something that seems likely to be an improvement over the previous state of the art". (It's possible that you have a comparative advantage in making such leaps of intuition, even though a priori that seems unlikely.)

you [Nesov] err on the side of calling bullshit when I know for certain that something is not bullshit

Do you have any examples? (This is unrelated to my points above. I'm just curious.)

Comment author: Vladimir_Nesov 30 July 2011 03:29:04AM *  3 points [-]

you err on the side of calling bullshit when I know for certain that something is not bullshit, and rarely err in the opposite direction

It's quite possible, since originally, before retreating to this mode 1.5-2 years ago, I suffered from mulling over confusing external ideas while failing to accumulate useful stuff amid all that noise (the last idea on this list was Ludics). Now most of the noise I have to deal with is what I generate myself, but I seem to be able to slowly distill useful things from it, and I have gotten into a habit of working on building up well-understood technical skills.

I guess I should allocate a new category for things I won't accept into my mind, as a matter of personal epistemic hygiene, but still won't become too confident are nonsense. I would still disapprove of these things for not being useful to many, or even being damaging to people like me-3-years-ago.