Although I think you're overstating and misapplying your case, Eliezer (as Robin implies, a "cynical" critique of both cynicism and idealism seems to me to yield more fruit than an idealist critique of both), I agree with Richard that cynicism is a poorer epistemological framework than skepticism.
It's also worth noting that admonishing people not to be so cynical is a common play for status, I think because (1) the crowd tends to award higher status to people who perform optimism as a general rule, and (2) there's an element of power alignment, and (if one is powerful) power maintenance, in convincing less powerful people not to be cynical about the reasons for power variance in a social group.
"Here's an odd bias I notice among the AI and singularity crowd: a lot of us seem to only plan for science-fictional emergencies, and not for mundane ones like economic collapse. Why is that?"
One hypothesis: because the value in the planning (and this may be rational, if nontransparent) is primarily for entertainment purposes.
"However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way."
I think this is a good idea for you. But don't be surprised if finding the right professor takes more work than an occasional bleg. And I do recommend getting it at Harvard or the equivalent. And if I'm not mistaken, you may still have to do a bachelor's and master's?
Some interesting, useful stuff in this post. Minus the status-cocaine of declaring that you're smarter than Robert Aumann about his performed religious beliefs and the mechanics of his internal mental state. In that area, I think Michael Vassar's model of how nerds interpret the behavior of others is your God. There are probably some 10-year-olds who can see through it (look everybody, the emperor has no conception that people can believe one thing and perform another). Unless this is a performance on your part too, and there's shimshammery all the way down!
There's a corollary mystery category which most of you fall into: why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds? Where's the fight against a global ban on reproductive human cloning? Where's the fight to increase legal organ markets? Where's the defense of the right of China (and other illiberal nations) to use prisoners (including political prisoners) for medical experimentation? Until you square away your own repugnancy-bias-based inaction, criticisms of that of...
Don't get bored with the small shit. Cancers, heart disease, stroke, safety engineering, suicidal depression, neurodegenerations, improved cryonic tech. In the next few decades I'm probably going to see most of you die from that shit (and that's if I'm lucky enough to persist as an observer), when you could've done a lot more to prevent it, if you didn't get bored so easily of dealing with the basics.
Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.
You can also add into your anthropic-principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime. Or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.
MZ, I disagree to a limited extent, for reasons I explained on my blog. I think Intrade may have specifically predicted McCain's temporary lead in the electoral college before a reasonable expert could (about one week in advance of its occurrence). Being able to predict events accurately one week in advance is about as good as our best weather prediction. It's not trivial.
Eliezer, whatever you're doing here with this post, it's not enlightenment. In my opinion you're pretending to an understanding that you don't have. That's not to say that your position is wron...
Unsurprisingly I agree with Carl, especially the tax-farming angle. I think it's unlikely wet-brained humans would be part of a winning coalition that included self-improving human+ level digital intelligences for long. Humorously, because of the whole exponential nature of this stuff, the timeline may be something like 2025 --> functional biological immortality, 2030 --> whole brain emulation --> 2030 brain on a nanocomputer --> 2030 earth transformed into computronium, end of human existence.
Those quotes seem rather weak to me, especially the last one. Armchair psychology: you're worried about your own propensity toward irrationality, so you seek to master it by focusing on irrationality external to you, as if by seeking to wipe it out. Kind of analogous to evangelical Christianity. I'm not sure rational heroes and irrational villains in a morality play are as valuable to those of us trying to build our best models of the world, including of various irrationalities as natural phenomena. Whether we should expend effort to convince people not to engage in various irrationalities is an empirical question, and maybe one that has different answers in each instance.
What do you think of the philosophy faculty of MIT and Caltech? I ask because I suspect the faculty there selects for philosophers who would be most useful to hard scientists and engineers (and to hard science and engineering students).
"I await the proper timing and forum in which to elaborate my skepticism that we should focus on trying to design a God to rule us all. Sure, have a contingency plan in case we actually face that problem, but it seems not the most likely or important case to consider."
I agree with Robin. Although I'm disappointed that he thinks he lacks an adequate forum to pound the podium on this more forcefully.
J Thomas, whether or not foxes or rabbits think about morality seems to me to be the less interesting aspect of Tim Tyler's comments.
As far as I can tell this is more about algorithms and persistence. I aspire to value the persistence of my own algorithm as a subjective conscious entity. I can conceive of someone else who values maximizing the persistence odds of any subjective conscious entity that has ever existed above all. A third who values maximizing the persistence odds of any human who has ever lived above all. Eliezer seems to value maximizing the ...
Ben, you write "Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?".
But my previous post clearly answered that question: "I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me."
"Someone sees a slave being whipped, and it doesn't occur to them right away that slavery is wrong. But they go home and think about it, and imagine themselves in the slave's place, and finally think, "No.""
I think lines like this epitomize how messy your approach to understanding human morality as a natural phenomenon is. Richard (the pro), what resources do you recommend I look into to find people taking a more rigorous approach to understanding the phenomenon of human morality (as opposed to promoting a certain type of it uncritically)?
I think the child-on-train-tracks/orphan-in-burning-building tropes you reference prey on bias rather than seek to overcome it. And I think you've been running from hard questions rather than dealing with them forthrightly (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like "do not murder children"). To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would b...
Mark, I think you over-identify with whoever controls the nuclear weapons in the US arsenal. I think their existence is a complex phenomenon, and I'm not sure it can be reduced to "I am an American citizen and voter, therefore I exert partial control and ownership of the weapons in the nuclear arsenal".
Beyond that, I think a major source of bias is people who let the status quo and power/hegemony alignment do a lot of their argumentative legwork for them. I think you're doing that here, but it's a much bigger problem warping our models of reality than this instance.
Frelkins, you shifted rather quickly from what I think is the stronger argument against MAD (greater catastrophic risk due to human error and irrationality) to what I think is a weaker one (the claim that some states are suicidal). I think you should focus on the stronger argument.
Also, the claim that a world without the type of MAD one gets from nukes is a world where all politics is solved through war is, I think, inaccurate. Some politics seems to be solved through war, some doesn't, both before and after MAD. It may be true that ...
I'm with James Miller and Caledonian on this one, and I want it taken further. Caledonian, I think the cognitive bias is good old repugnancy bias. How I'd like it taken further: I think what we want to avoid is not (1) horrific outcomes due to war from a specific type of technology, nor (2) horrific outcomes due to war generally, but (3) horrific outcomes generally. As such, beyond using nuclear weapons (which I'm not convinced prevents any of the three, though it may), how about greatly increasing the variety of human medical experimentation we engage in,...
"And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people's goals, and our tribe's moral code, truly do imply that they should do things our way for their benefit."
Great post overall, but I'm skeptical of this often-repeated element in OB posts and comments. I'm not sure honest believers always, or even usually, have a persuasion advantage. This reminds me of some of Michael Vassar's criticism of nerds thinking of everyone else as a defective nerd (nerd defined as people who value truth-telling/sincerity over more political/tactful forms of communication).
I haven't gotten through your whole post yet, but the "postmodernist literature professor" jogged my memory about a trend I've noticed in your posts. Postmodernists, and perhaps in particular postmodernist literature professors, seem to be a recurring foil. What's going on there? Is there a way to break out of that analytically? I sense that as a deeper writer and thinker you'll go beyond cartoonish representations of foils, if for nothing more than to reflect a deeper understanding of things like postmodernist literature professors as natural phenomena. It seems to me to be more a barrier to knowledge and understanding than an accurate summation of something in our reality (postmodernist literature professors).
Caledonian, you make some good posts, but here I think your latest post falls in the category of anti-knowledge. I recommend staying away from heroic narratives and morality plays (Watson, Skinner GOOD; Freud BAD) and easy targets, like those who express the wish-fulfilling belief that the mind mystically survives the death of the body.
Whether the mind does survive the death of the body in a sufficiently large universe/multiverse (with multiple "exact" iterations of us) is a more complicated question, in that black box/"magic" are...
Eliezer, I think you've given ample proof that Watson has written some things as cartoonish as your OP suggests. I don't think this has been shown to be generalizable across all of the behaviorist scientists of his era. Ian Maxwell's description of behaviorists sounds like a reasonable way for science to be done pre-MRIs, etc. But your criticism, in your OP, of Watson's approach (or at least his rhetoric) hits the bullseye and is a perfect contribution to the mission of this blog.
Who cares if Caledonian is banned from here? Hopefully he'll post more on my blog as a result. I've never edited or deleted a post from Caledonian or anyone else (except to protect my anonymity). Neither has TGGP to my knowledge. As I've posted before on TGGP's blog, I think there's a hierarchy of blogs, and blogs that ban and delete for something other than stuff that's illegal, can bring liability, or is botspam aren't at the top of the hierarchy.
If no post of Caledonian's was ever edited or deleted from here (except perhaps for excessive length), this blog would be just as good. Maybe even better.
Post what you want to post most. The advice that you should go against your own instincts and pander is bad, in my opinion. The only things you should force yourself to do are: (1) try to post something every day, and (2) try to edit and delete comments as little as possible. I believe the result will be an excellent and authentic blog with the types of readers you want most (and that are most useful to you).
In the last similar thread someone pointed out that we're just talking about increasing existential risk in the tiny zone where we observe (or reasonably extrapolate) each other existing, not the entire universe. It confuses the issue to talk about destruction of the universe.
Really this all traces back to Joy's "grey goo" argument. I think what needs to be made explicit is weighing our existential risk if we do or don't engage in a particular activity. And since we're not constrained to binary choices, there's no reason for that to be a starting...
Oxford is a little different from the Wailing Wall: it's one of the world's earliest universities, and it's been one of the world's great universities for centuries. Eliezer, you would love Florence. In England and in other old countries, I'm most impressed by ancient pubs. One can see how an important church or castle can remain for centuries. But for a little old pub to eke it out for that long, there's something special about that, IMO.
I recommend being wary of a point that needs to exist as part of a dialectical pair. What's orthogonal to cynicism vs. idealism? What's completely outside the set? What encompasses both? What has elements of both? What subversive idea or analytical framework is muted by discussing cynicism vs. idealism instead? I think these types of questions are a good starting point whenever a dialectic is promoted.