A short reply to the Book of Eliezer and a comment on the Book of Luke.

No one wants to save the world. You must thoroughly research this. Those who think they truly want to save the world are, in reality, just horribly afraid of the consequences of not saving the world. And that is a world of difference.

Eliezer, you know that ridiculously strong aversion to lost purposes and sphexishness that you have?1 Sometimes, very rarely, other people have that too. And most often it is a double-negative aversion. I am sure you know as much as very nearly anyone what it feels like to work from the inside of a triple-negative motivation system by default, for fear of being as evil and imperfect as every other human in history, among other less noble fears. You quickly learn to go meta to escape the apparently impossible double binds (if going meta isn't itself choosing a side), but by constantly moving vertically you never practice pushing to the left or to the right, or choosing which responsibility to sacrifice in the first place. And even if you could, why would you want to be evil?

And for this rare kind of person, telling them to stop obsessing over prudence or to just try to make marginal contributions immediately gets pattern-matched to that age-old adage: "The solution is easy, just shut up and be evil." Luckily it is this kind of person we can make the most use of when it comes to the big crunch time (if we're not already in it).


1We do not yet know how to teach this skill, and no one can be a truly aspiring rationalist without it, even if they can still aspire to perfection. That does mean I believe there are like maybe 5 truly aspiring rationalists in this community, a larger set of falsely aspiring rationalists, a further much larger set of truly aspiring aspiring "rationalists", and a further much much larger set of falsely aspiring aspiring "rationalists". (3, 30, 300, 3000, say.) I don't think anyone thinks about this nearly enough, because no one has any affordance (no affordance to not not-think about it), especially not when they're thinking fuzzy happy thoughts about creating aspiring rationalists or becoming a rationalist.

30 comments

This reads like a personal journal entry. I can't tell to what degree I'm missing necessary context, to what degree you're being puckish, and to what degree this just isn't communication.

It looks like you feel strongly about something going wrong, here; if you can, I'd appreciate you taking the time to state this comprehensibly.

It looks to me, rather, like Will Newsome continues to think that he can be cleverer than anyone else on the meta level without really understanding any of the relevant object-level topics. I continue to disagree.

Could you say that in English?

Probably not, but I'll try to restate the message and motivation:

"I notice that wanting to do something is psychologically very different from aversion to not doing something. I have observed that attraction to saving far mode people and the like if taken very seriously is often the result of the latter. I observe and assert that the type of mind that does this is a disproportionately important mind to influence with "rationalist" or SingInst memes. This is the type of mind that truly groks Eliezer's aversion to lost purposes. I theorize that this type of mind is sometimes formed by being around an abundance of double binds, though I am unwilling to put forth evidence strongly favoring this hypothesis. I think it is important to make a good impression on that type of mind and to avoid negatively reinforcing the anti-anti-virtuous behaviors associated with that type of mind, especially as it is a type of mind that is generally oversensitive to negative reinforcement and could become completely paralyzed. I notice that we specifically do not know how to create the skill of avoiding lost purposes which also makes it important to avoid negatively influencing those who already happen to have the skill. I have created this post to further the agenda of setting up a culture that doesn't repel and perhaps even attracts this type of mind.

"As a related side note, I notice that the skill of avoiding lost purposes is very important and wish to express some distress that no apparent effort has been put into addressing the problem. I assert that most "aspiring rationalists" do not seem to even aspire to attain this fundamental skill of rationality, and thus cannot actually be aspiring to rationality, even if they aspire to aspire to what they think is rationality. I thus implicitly claim that I would be able to tell if they were averse to lost purposes, but am unwilling to give evidence of this. I choose to be deliberately misleading about my confidence in this judgment to provoke interesting people to reply in indignation."

I observe and assert that the type of mind that does this is a disproportionately important mind to influence with "rationalist" or SingInst memes.

From a Singularity perspective, the importance of rationality evangelism is being way overrated. There is still a tendency to mix up rationality and intelligence, as if becoming more rational will produce radically superior problem-solving skills. But if we're talking about how to solve a problem like Friendly AI design, then what you need above all are people with high intelligence and relevant knowledge. "Aversion to lost purposes", whatever that is, might be a trait of talented idealistic personalities who get distressed by dead hopes and organizational dysfunction, but some people learn early that that is normality, and their own progress is all the more streamlined for not fighting these facts of life.

In my opinion, the main source of the morale needed to sustain an effort like FAI research, in the midst of general indifference and incomprehension, is simply a sense among the protagonists that they are capable of solving the problem or of otherwise making a difference, and that derives in turn from a sense of one's own abilities. If the objective is to solve the most difficult problems, and not just to improve the general quality of problem-solving in society, then rationality evangelism is a rather indiscriminate approach.

Agreed that rationality evangelism (edit:) might be overrated; the important thing is spreading the Friendliness-might-be-important memes far, and apparently SingInst is using "rationality" as one of its memetic weapons of choice. I personally am not suggesting this memetic strategy is a well-thought-out one. "Aversion to lost purposes" doesn't at all mean getting distressed because this world isn't the should-world; it means the thing that Eliezer talks about in his post "Lost Purposes".

How much effort has been put into teaching an aversion to lost purposes? What has been tried and what have the failures looked like?

Moreover, given what's being said here, teaching an aversion may be the wrong tack. I suspect it's more motivating to get strong, positive feedback when your efforts align with your goals. It's hard to state the positive condition clearly; it's far easier to point at instances of lost purposes and disapprove than to point at clear goal-oriented behavior and approve. It might be useful to learn, though.

We must thoroughly research this. :j

Exactly, it's tricky. I don't know if anyone else will find this funny, but here's a conversation I had recently:

Me: "Alright, I think I've decided to make myself even more horribly afraid of the consequences of flinching away from examining lost purposes and not thinking things through from first principles."
Other: "Um um um um so I'm not sure that's a good idea..."
Me: "Why? See, it's possible that it will destroy my motivation system, but the best thinkers I know by far all seem to have this tendency. My only comparative advantage at this point is in thinking well. Therefore..."
Other: "You bastard."

I recognize this mental state! I don't know if that's hilarious or terrifying. :/

This actually got me thinking, though... I'm working on a top level comment now.

Your posts and comments would be much improved by trying to actually communicate, rather than playing around with oblique LW references to produce gibberish not worth the effort of decoding (even for those who understand the underlying ideas), or outright trolling.

I don't like the whole "Book of Eliezer/Book of Luke" bit. And while I do appreciate the veiled Musashi reference, I think it too detracts from the (very important) message of the post.

Honestly, I'm also not really sure why this is a post rather than a reply or even a PM/email.

Thanks for the straightforward critique! Also, I am surprised that at least someone thinks the message is very important, and I notice that I have more positive affect towards Less Wrong as a result. (The Musashi reference is actually more of a veiled reference to Eliezer's "Lost Purposes", the part where he says "(I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult.)". That says a lot about my style of communication, I suppose...)

To be frank, anyone who doesn't understand that the core of rationality is actually being more effective at making correct predictions is not only failing to gain rationality from LW but may actually be becoming dangerous by attaining an increased ability to make clever arguments. I haven't interacted with the community enough to determine whether a significant number of "aspiring rationalists" lack this understanding, but if they do, it is absolutely critical that this be rectified.

If you're saying that not many people (anywhere) are really trying all-out to save the world, or to do any one thing, I agree. I don't understand your description of (a particular kind of) avoiding self-knowledge. I don't understand being afraid of not wanting to save the world.

Could you expand on what you mean by the difference between aspiring rationalists and aspiring aspiring "rationalists"? I don't think we have many people who say things like "I aspire to be an aspiring rationalist", either truly or falsely.

Something like that sounds plausible, but wouldn't many of those who were trying to try also falsely claim that they were actually trying? If so, what separates these "aspiring aspirings" from the "false aspirings"?

They don't say that, but it's what they do. (As you noted this is different from falsely aspiring, which is not doing much of anything.) That's my assertion-without-evidence, anyway.

Let me try again: Group 1 understands the Void and chases it, Group 2 understands the Void and doesn't chase it, Group 3 doesn't understand the Void but chases something vaguely like it, Group 4 doesn't understand the Void and doesn't chase something vaguely like it. "Understand" is intentionally vague here and doesn't imply a full understanding, but I assert that there's a somewhat sharp and meaningful discontinuity.

What's the Void? Maybe I don't know about it.

Thanks. So it's a good thing. Something like empiricism or truth and being purposeful and honest in meeting your goals rather than getting distracted.

Why is it called 'the Void'?

When I read the Virtues a long time ago, I thought it meant the concept of 'quality' I read about in Zen and the Art of Motorcycle Maintenance. But now I don't suppose such a Platonic concept could possibly have been intended.

This comment moved here.


Hm. I guess I should make this a discussion post, if I want anyone to read it... :/

[This comment is no longer endorsed by its author]

truly aspiring rationalists

Does that mean people for whom rationalism is a near-terminal goal that cannot become a lost purpose? Do you use "rationalism" somewhat like the way Charlie Sheen might use "winning", as "rationalism" is often used here?

If yes and no, then to what end rationalism?

If yes and yes, then you value that for someone who, in relation to various things, wants them, that they have as a cherished thing achieving their own wants? Is people having such a nearly-terminal goal a correspondingly deep value of yours, or is it more instrumental? Either way, is coming to value that one of the smaller changes I could make to turn my values towards consistency (and how much more change than coming to consciously value it would I have to do if it is not emergent from my existing values)? If so, at what level would I be valuing that, presumably the same as you do, no? It isn't enough to have a passing devotion to wanting that, that which I want, I should get it?

If this is unclear or badly off-target, let it indicate the magnitude of my confusion as to what you meant.

And for this rare kind of person, telling them

This comes to mind.

[09:28] Eliezer: if I had to take a real-world action, like, guessing someone's name with a gun to my head
[09:29] Eliezer: if I had to choose it would suddenly become very relevant that I knew Michael was one of the most statistically common names, but couldn't remember for which years it was the most common, and that I knew Michael was more likely to be a male name than a female name
[09:29] Eliezer: if an alien had a gun to its head, telling it "I don't know" at this point would not be helpful
[09:29] Eliezer: because there's a whole lot I know that it doesn't
[09:30] X: ok
[09:33] X: what about a question for which you really don't have any information?
[09:33] X: like something only an alien would know
[09:34] Eliezer: if I have no evidence I use an appropriate Ignorance Prior, which distributes probability evenly across all possibilities, and assigns only a very small amount to any individual possibility because there are so many
[09:35] Eliezer: if the person I'm talking to already knows to use an ignorance prior, I say "I don't know" because we already have the same probability distribution and I have nothing to add to that
[09:35] Eliezer: the ignorance prior tells me my betting odds
[09:35] Eliezer: it governs my choices
[09:35] X: and what if you don't know how to use an ignorance prior
[09:36] X: have never heard of it etc
[09:36] Eliezer: if I'm dealing with someone who doesn't know about ignorance priors, and who is dealing with the problem by making up this huge elaborate hypothesis with lots of moving parts and many places to go wrong, then the truth is that I automatically know s/he's wrong
[09:36] Eliezer: it may not be possible to explain this to them, short of training them from scratch in rationality
[09:36] Eliezer: but it is true
[09:36] Eliezer: and if the person trusts me for a rationalist, it may be both honest and helpful to tell them, "No, that's wrong"
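
(As an editorial illustration, not part of the original chat: a minimal sketch of the uniform "ignorance prior" Eliezer describes, and of how a single piece of knowledge such as "Michael is a common name" moves you off it. The candidate list and the relative weights below are hypothetical, chosen purely for illustration.)

```python
# A uniform "ignorance prior": with no evidence, probability is spread evenly
# over all candidate answers, so each one individually gets only a tiny share.
def ignorance_prior(possibilities):
    n = len(possibilities)
    return {p: 1.0 / n for p in possibilities}

# Reweighting by whatever you do know (e.g. rough name frequencies), then
# renormalizing, is what takes you from "I don't know" to actual betting odds.
def reweight(prior, weights):
    unnormalized = {p: prior[p] * weights.get(p, 1.0) for p in prior}
    total = sum(unnormalized.values())
    return {p: v / total for p, v in unnormalized.items()}

# Hypothetical example: guessing a stranger's name from 1,000 candidates.
names = ["Michael", "Sarah"] + ["name_%d" % i for i in range(998)]
prior = ignorance_prior(names)                     # every name gets 0.001
posterior = reweight(prior, {"Michael": 50.0})     # "Michael is common" shifts the odds
print(prior["Michael"], posterior["Michael"])
```
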

Eliezer obviously wouldn't be telling them to shut up and be evil; he'd be intending to tell that to the person he'd infer he was talking to, and if this "rare person" couldn't learn about Eliezer's actual intent by inferring the message Eliezer had intended to communicate to whomever Eliezer ought to have thought he was talking to, the person would be rarely dense.

That part of Eliezer's message, then, is not flawed, so I'm not sure why you thought it needed addressing.

This assumes I'm reading this post correctly, something I'm not confident of.

Does that mean people for whom rationalism is a near-terminal goal that cannot become a lost purpose?

Maybe in some way, but not in the way that you interpret it to mean... I emphasize the importance of noticing lost purposes, which is central to both epistemic and instrumental rationality. Elsewhere in this thread I re-wrote the post without the cool links, if you're interested in figuring out what I originally meant. I apologize for the vagueness.

As for your second critique, I'm not claiming that Eliezer's message is particularly flawed, just suggesting an improvement over current norms of which Eliezer's original message could be taken as partially representative, even if it makes perfect sense in context. That is, Eliezer's message isn't really important to the point of the post and can be ignored.

Sunzi said: The art of war is of vital importance to the State. It is a matter of life and death, a road either to safety or to ruin. Hence it is a subject of inquiry which can on no account be neglected.
The art of war, then, is governed by five constant factors, to be taken into account in one's deliberations, when seeking to determine the conditions obtaining in the field. These are: (1) The Moral Law; (2) Heaven; (3) Earth; (4) The Commander; (5) Method and discipline.
The Moral Law causes the people to be in complete accord with their ruler, so that they will follow him regardless of their lives, undismayed by any danger.

The very first factor in the very first chapter of The Art of War is about the importance of synchronous goals between agents and represented. It is instrumental in preserving the State. It is also instrumental in preserving the state (sic).

Even so,

Sun Tzu replied: "Having once received His Majesty's commission to be the general of his forces, there are certain commands of His Majesty which, acting in that capacity, I am unable to accept."
Accordingly, he had the two leaders beheaded, and straightway installed the pair next in order as leaders in their place.

A metaphor.

ridiculously strong aversion

The iron is hot, some feel fear.

just suggesting an improvement

You aren't though.

You're expressing belief in a possible downside of current practice. We can say, unconditionally and flatly, that it is a downside (if real) without it thereby being right to minimize that downside. To your credit, you also argue that effects on the average influenced person are less valuable than is generally thought, which, if true, would be a step towards indicating that a change in policy would be good.

But beyond that, you don't articulate what would be a superior policy, and you have a lot of intermediary conclusions to establish to make a robust criticism.

You aren't though.

Correct, I was imprecise. I'm listing a downside and listing nonobvious considerations that make it more of a downside than might be assumed.

(Apparently posts get moved to Discussion if they get downvoted enough. Cool.)