Fair.
Something something blackmailer is subjunctively dependent with the teacup! (This is a joke.)
No, they can't. See: "akrasia" on the path to protecting their hypothetical predicted future selves 30 years from now.
The teacup takes the W here too. It's indifferent to blackmail! [chad picture]
I don't disagree with any of this.
And yet, some people seem to be "better at things" across domains than others. And I am more afraid of a broken human person (he might shoot me) than a broken teacup.
It is certainly possible that "intelligence" is a purely intrinsic property of my own mind, a way to measure "how much do I need to use the intentional stance to model another being, rather than model-based reductionism?" But this is still a fact about reality, since my mind exists in reality. And in that case "AI alignment" would still need to be a necessary f...
"A map that reflects the territory. Territory. Not people. Not geography. Not land. Territory. Land and its people that have been conquered."
The underlying epistemology and decision theory of the sequences is AIXI. To AIXI the entire universe is just waiting to be conquered and tiled with value because AIXI is sufficiently far-sighted to be able to perfectly model "people, geography, and land" and thus map them nondestructively.
The fact that mapping destroys things is a fact about the scope of the mapper's mind, and the individual mapping process, not abou...
Human reasoning is not Bayesian because Bayesianism requires perfectly accurate introspective belief about one's own beliefs.
Human reasoning is not Frequentist because Frequentism requires access to the true frequency of an event, which humans lack because they cannot remember the past accurately.
To be "Frequentist" or "Bayesian" is merely a philosophical posture about the correct way to update beliefs in response to sense-data. But how to update is an open problem: the current best solution, AFAIK, is Logical Induction.
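To make the "posture about updating" concrete, here is a minimal sketch of exact Bayesian conditioning (my own toy illustration, not anything from the thread). Note that it presupposes you can state your prior exactly, which is precisely the perfect introspective access the comment above denies humans have.

```python
def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior p(H|E) = p(E|H) * p(H) / p(E)."""
    return likelihood * prior / marginal

# Toy numbers: prior p(H) = 0.5, p(E|H) = 0.8, p(E|~H) = 0.4.
prior = 0.5
p_e = 0.8 * prior + 0.4 * (1 - prior)  # total probability of the evidence
posterior = bayes_update(prior, 0.8, p_e)
print(round(posterior, 3))  # 0.667
```

The point of the sketch is that every quantity on the right-hand side must be known exactly; an agent uncertain about its own priors cannot run this rule as stated.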
This is one of the most morally powerful things you have ever written. Thanks.
This is actually completely fair. So is the other comment.
Thank you for echoing common sense!
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience with groups of people more distant from my own, to use a certain person's language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn't work.
You absolutely have a reason to believe the article is worth reading.
If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.
I don't think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.
However, that's not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).
I think the policy I follow (although I hadn't made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.
Which incidenta...
I read the linked article, and my conclusion is that it’s not even in the neighborhood of “worth reading”.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine; I felt more like a pupil at the foot of a teacher, trying to integrate the teacher's information.
I think the passive, reactive approach you mention is actually a really good way to be more evidential in personal interaction without being explicitly manipulative.
Thanks!
I think you are entirely wrong.
However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.
Can we have LessWrong not be Reddit? Let's not be Reddit. Too late, we're already Reddit. Fuck.
You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technol...
Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.
This is a very good criticism! I think you are right about people not being able to "just."
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or who criticize popular institutions. Perhaps people genuinely tried to form a strategy and rejected my toy strategies as unworkable, but I do not think so, based on the "vibe" and on the arguments people are making, such as the argument from cult.
I think you are actually completely ...
EDIT: Ben is correct to say we should taboo "crazy."
This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren't as positive utility as they thought. (entirely wrong)
I also don't think people interpret Vassar's words as a strategy and implement incoherence. Personally, I interpreted Vassar's words as factual claims and then tried to build a strategy on them. When I was repeatedly surprised by reality, I updated away. I think the other people just no...
The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of bus...
On the third paragraph:
I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)
Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See "Safety in numbers" by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe e...
I am not sure how much 'not destabilize people' is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of "you are expected to behave be...
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.
My suggestion for Vassar is not to 'try not to destabilize people' exactly.
It's to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he's interacted with about what it's like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you're speaking to as though they're a full, real human—not a pair of ears to be talked into or a mind to insert t...
Thing 0:
Scott.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."
This is extremely relatable to my lived experience. I am a stereotypical "...
I enjoyed reading this. Thanks for writing it.
One note though: I think this post (along with most of the comments) isn't treating Vassar as a fully real person with real choices. It (also) treats him like some kind of 'force in the world' or 'immovable object'. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I'm glad you yourself were able to "With basic rationality skills, avoid contracting the Vassar, then [...
I mostly see where you're coming from, but I think the reasonable answer to "point 1 or 2 is a false dichotomy" is this classic, uh, tumblr quote (from memory):
"People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail."
This goes especially if the thing that comes after "just" is "just precommit."
My expectation is that the people who espouse 1 or 2 expect that those interacting with Vassar are incapable of precommitting to th...
I think it's a fine way to think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to check which facts they give in support. Do their facts seem scant, cherry-picked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: "this t...
This is how real-life humans talk.