My attempt at a summary:
Our brains pattern-match transhumanism to religion. Humans' evolved reaction to religion is either to dismiss it as dangerous, or to profess the beliefs while failing to act on their logical consequences in real life. Thus we can expect most people to react to transhumanism in one of these two ways. Even wannabe rationalists are deeply irrational.
You should not: 1) expect that other people will become more rational if you yell at them enough; 2) try to build an AI. Our brains are optimized for winning debates, not for reaching correct conclusions. Your AI project will most likely fail; but even if it doesn't, the resulting AI will most likely not be Friendly.
So, what should you do? No certain answer, only suggestions: 1) Be serious about your rationality, and cooperate with other people who are serious about rationality. Cooperation among rational people is powerful. Keep your ego in check; instead of trying to win, try to learn. Don't rely on fictional evidence. 2) Make a lot of money, because it's instrumentally useful. If you don't have a well-paying job, study for the actuarial exams: they overlap with rationality and will let you make good money later.
Be prepared for this to be considered weird even by most transhumanists, so don't expect much social support even within your niche. But it's fun, and better than what most smart people do. And there is a small chance it could somehow help you save the world.
You should not: 1) expect that other people will become more rational if you yell at them enough;
I think the rest of your summary is accurate, but Vassar actually said that you can't use rational argument to convince people of much of anything. Now that I think about it, carefully judged yelling probably works better than rational argument.
"You can't convince anyone of anything using rational argument" is one of those cached thoughts that makes you sound cool and mature but isn't actually true. Rational argument works a hell of a lot worse than smart people think it does, but it works in certain contexts and with certain people enough of the time that it's worth trying sometimes. Even normal people are swayed by facts from time to time.
Isaac Asimov, history's most prolific writer and Mensa's honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws
The three laws were not intended to be bulletproof, just a starting point. And Asimov knew very well that they had holes; most of his robot stories were about the holes in the three laws.
The Three Laws of Robotics are normally rendered as ordinary English sentences, but in-universe they are defined not by words but by mathematics. Asimov's robots don't have "thou shalt not hurt a human" chiseled into their positronic brains; instead, they are built from the ground up around certain moral precepts, summarized for laypeople as the three laws, so deeply embedded in their cognition that robots with the three laws removed or modified don't work right, or don't work at all.
Asimov actually gets the idea that making AI ethical is hard better than any other sci-fi author I can think of, although this stuff is mostly in the background, since the plain-English descriptions of the three laws are good enough for a story. But IIRC The Caves of Steel talks about this, and makes it abundantly clear that the Three Laws are made part of robots at the level of very complicated, very thorough coding: something that loads of futurists and philosophers alike often ignore when they think they've come up with some brilliant schema for creating an ethical system, for AI or for humans.
The line where Vassar says that sci-fi authors haven't improved on them is very probably incorrect. I don't read sci-fi, but I'd bet that in all the time since then, authors have developed and refined these ideas a fair bit.
Tyler Emerson is a real person, but Robin Hanson has never co-authored a paper with him. Vassar was probably thinking of Are Disagreements Honest? by Robin Hanson and Tyler Cowen.
How should I contact Vassar about my willingness to follow his lead on whatever projects he deems sensible?
A private message to his LW account could be one option. He currently works at MetaMed, so offering them your help (if you can contribute) could be another.
Nitpick: Asimov was a member of Mensa on and off, but was highly critical of it and didn't like Mensans. He was an honorary vice president, not president (according to Asimov, anyway), and he wasn't very happy about it.
Relevant to this: "Furthermore, I became aware that Mensans, however high their paper IQ might be, were likely to be as irrational as anyone else." (See the book I. Asimov, pp. 379-382.) The vigor of Asimov's distaste for Mensa as a club permeates that essay/chapter.
Nitpick it is, but Asimov deserves a better fate than having a two-sentence bio associate him with Mensa.
Confirmation of the claim that actuaries earn six figures, with no formal education necessary and a good job market:
I'm curious if anybody here has actually done this sort of thing?
How smart do you have to be in order to follow this advice? Are we talking two standard deviations or five?
My guess is that you have to be pretty smart but not astonishingly smart to understand the argument, and likewise for becoming an actuary.
The really rare trait is the ability to take ideas seriously enough to act on them.
bear in mind that apparently only one person has ever thought in terms of something more sophisticated so far,
Who is he referring to here?
Thanks for posting this, it was interesting. It definitely had a retro feel. I wonder how much would differ if he gave the speech today. Some semi-informed speculation:
In 2004, Michael Vassar gave the following talk, titled Memes and Rational Decisions, to some transhumanists, about how humans can reduce existential risk. It is well-written and gives actionable advice, much of which is unfamiliar to the contemporary Less Wrong zeitgeist.