It's the opposite of the lesson I usually try to teach, but in this one case I'll say it: it's not the world that's mad, it's you.
I don't think he is "mad", at least not if you press him enough. A few weeks ago I posted the following comment on one of his Facebook submissions:
Will, this is off-topic, but I'm curious. What would you do if 1.) any action would be ethically indifferent, 2.) the expected utility hypothesis was bunk, and 3.) all that really counted was what you want based on naive introspection?
I'm asking because you (and others) seem increasingly to lose yourselves in the logical implications of maximizing expected utility and ethical considerations.
Take care that you don't confuse squiggles on paper with reality.
His reply (emphasis mine):
Alexander, I don't think that's a particularly good model of my actual reasoning. The simple arguments I have for thinking about what I think about don't involve Pascalian reasoning or conjunctions of weird beliefs, and when it comes to policy I am one of the most vocal critics on LW of the unfortunate trend where otherwise smart people attempt to implement complicated policies due to the output of some incredibly brittle model, often without even taking into account opportunity costs or even considering any obviously better meta-level policies. That is insanity, and completely unrelated to any of the kinds of thinking that I do.
The reasons for my current obsessions are pretty simple, though it's worth noting that I am intentionally keeping my options very, very open.
Seed AI appears to be very possible to engineer. "Provably"-FAI isn't obviously possible to engineer given potential time constraints. If we could make a seed AI that was reflective enough, for example due to a strong founding in what Steve Rayhawk wants from a "Creatorless Decision Theory", and we had strong arguments about attractors that such an agent might fall into, and we had reason to believe that it might converge on something like FAI, then there might come a time when we should launch such a seed AI, even without all the proofs---for example due to being in a politically or existentially volatile situation.
Between BigNum-maximizer Goedel machine-like foomers and provably-FAI foomers, there's a long continuum of AIs that are more or less reflective on the source of their utility function and what it means that some things rather than some other things caused that particular utility function to be there rather than some other one. The typical SingInst argument that a given AGI will be some kind of strict literalist with respect to what it thinks is its utility function is simply not very strong. In fact, it even contradicts Omohundro's Basic AI Drives paper, which briefly addresses the topic: "For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated to reflect on their goals and to make them explicit." Some small amount of reflection would seem to open the door for arbitrarily large amounts of reflection, especially if the AI is simultaneously modifying its decision theory---obviously we'd rather avoid an argument of degree where unchained intuitions are allowed to run amok.
We can make the debate more technical by looking at Goedel machines and program semantics. I have some relevant ideas, but perhaps Schmidhuber's talk about some Goedel machine implementations in a few days at AGI2011 will prove enlightening.
I'm already losing steam, so we'll just call that Part One. Part Two and maybe a Part Three will talk about: decision theories upon self-modification; decision theory in context; abstract models of optimization & morality; timeless control and game theory of the big red button; and probably other miscellaneous related ideas.
But after all that I don't really know how to answer your question. Wants... Even if somehow the thousand aversions that are shoulds were no longer supposed to compel me, they'd still be there, and I'd still be motivationally paralyzed, or whatever it is I am. I'd probably do the exact same things I'm doing now: living in Berkeley with my girlfriend, eating good food, regularly visiting some of the coolest people on Earth to talk about some of the most interesting ideas in all of history. All of that sounds pretty optimal as far as living on a budget of zero dollars goes. If the aversions were lifted, but I was still me, then I haven't a good idea what I'd do. I'd be happy to immerse myself in the visual arts community, perhaps, or if I thought I could be brilliant I'd revolutionize music cognition and write by far the best artificial composer algorithms. I'd go to various excellent universities for a year or two, and if somehow I found an easy way to make money along the way, e.g. with occasional programming jobs, then I'd frequently travel to Europe and then Asia. I imagine I'd spend very many months in Germany, especially Bavaria. Walking along green mountains or resting under trees in meadow orchards, ideally with a MacBook Pro and a drawing tablet handy. I'd do much meditation and probably progress very quickly, and at some point I expect I'd develop a sort of self-refuge. But I don't know, I'm just saying things that sound nice as if I can't have them, and I may very well end up doing most of them no matter what future I lead.
It seems to me that he's still with the rest of humanity when it comes to what he is doing on a daily basis and his underlying desires.
Belatedly.
"For one thing, it has to make those objectives clear to itself. If its objectives are only implicit in the structure of a complex circuit or program, then future modifications are unlikely to preserve them. Systems will therefore be motivated"
Hold on. Motivated by what? If its objectives are only implicit in its structure, then why would those objectives include their own preservation?
Warning: sappy personal anecdotes ahead! See also Eliezer's Coming of Age story, SarahC's Reflections on rationality a year out, and Alicorn's Polyhacking.
On January 11, 2007, at age 21, I finally whispered to myself: There is no God.
I felt the world collapse beneath me. I'd been raised to believe that God was necessary for meaning, morality, and purpose. My skin felt cold and my tongue felt like cardboard. This was the beginning of the darkest part of my life, but the seed of my later happiness.
I grew up in Cambridge, Minnesota — a town of 5,000 people and 22 Christian churches (at the time). My father was (and still is) pastor of a small church. My mother volunteered to support Christian missionaries around the world.
I went to church and Bible study every week. I prayed often and earnestly. For 12 years I attended a Christian school that taught Bible classes and creationism. I played in worship bands. As a teenager I made trips to China and England to tell the godless heathens there about Jesus. I witnessed miraculous healings unexplained by medical science.
And I felt the presence of God. Sometimes I would tingle and sweat with the Holy Spirit. Other times I felt led by God to give money to a certain cause, or to pay someone a specific compliment, or to walk to the cross at the front of my church and bow before it during a worship service.
Around age 19 I got depressed. But then I read Dallas Willard’s The Divine Conspiracy, a manual for how to fall in love with God so that following his ways is not a burden but a natural and painless product of loving God. And one day I saw a leaf twirling in the wind and it was so beautiful — like the twirling plastic bag in American Beauty — that I had an epiphany. I realized that everything in nature was a gift from God to me. Grass, lakes, trees, sunsets — all these were gifts of beauty from my Savior to me. That's how I fell in love with God, and he delivered me from my depression.
I moved to Minneapolis for college and was attracted to a Christian group led by Mark van Steenwyk. Mark’s small group of well-educated Jesus-followers are 'missional' Christians: they think that loving and serving others in the way of Jesus is more important than doctrinal truth. That resonated with me, and we lived it out with the poor immigrants of Minneapolis.
Doubt
By this time I had little interest in church structure or doctrinal disputes. I just wanted to be like Jesus to a lost and hurting world. So I decided I should try to find out who Jesus actually was. I began to study the Historical Jesus.
What I learned, even when reading Christian scholars, shocked me. The gospels were written decades after Jesus' death, by non-eyewitnesses. They are riddled with contradictions, legends, and known lies. Jesus and Paul disagreed on many core issues. And how could I accept miracle claims about Jesus when I outright rejected other ancient miracle claims as superstitious nonsense?
These discoveries scared me. It was not what I had wanted to learn. But now I had to know the truth. I studied the Historical Jesus, the history of Christianity, the Bible, theology, and the philosophy of religion. Almost everything I read — even the books written by conservative Christians — gave me more reason to doubt, not less. What preachers had taught me from the pulpit was not what they had learned in seminary. My discovery of the difference had just the effect on me that conservative Bible scholar Daniel B. Wallace predicted:
I started to panic. I felt like my best friend — my source of purpose and happiness and comfort — was dying. And worse, I was killing him. If only I could have faith! If only I could unlearn all these things and just believe. I cried out with the words from Mark 9:24, "Lord, help my unbelief!"
I tried. For every atheist book I read, I read five books by the very best Christian philosophers. But the atheists made plain, simple sense, and the Christian philosophers were lost in a fog of big words that tried to hide the weakness of their arguments.
I did everything I could to keep my faith. But I couldn’t do it. I couldn’t force myself to believe what I knew wasn’t true. So I finally let myself whisper the horrifying truth out loud: There is no God.
I told my dad, and he said I had been led astray because I was arrogant to think I could get to truth by studying — I was "relying too much on my own strength." Humbled and encouraged, I started a new quest to find God. I wrote on my blog:
It didn’t last. Every time I reached out for some reason — any reason — to believe, God simply wasn’t there. I tried to believe despite the evidence, but I couldn’t believe a lie. Not anymore.
No matter how much I missed him, I couldn’t bring Jesus back to life.
New Joy and Purpose
Eventually I realized that millions of people have lived lives of incredible meaning, morality, and happiness without gods. I soon realized I could be happier and more moral without God than I ever was with him.
In many ways, I regret wasting more than 20 years of my life on Christianity, but there are a few things of value I took from my life as an evangelical Christian. I know what it’s like to be a true believer. I know what it’s like to fall in love with God and serve him with all my heart. I know what it’s like to experience his presence. I know what it’s like to isolate one part of my life from reason or evidence, and I know what it’s like to think that is a virtue. I know what it’s like to be confused by the Trinity, the failure of prayers, or Biblical contradictions but to genuinely embrace them as the mystery of God. I know what it’s like to believe God is so far beyond human reason that we can’t understand him, but at the same time to fiercely believe I know the details of how he wants us to behave.
I can talk to believers with understanding. I've experienced God the same way they have.
Perhaps more importantly, I have a visceral knowledge that I can experience something personally, and be confident of it, and be completely wrong about it. I also have a gut understanding of how wonderful it can be to just say "oops" already and change your mind.
I suspect this is why it was so easy for me, a bit later, to quickly change my mind about free will, about metaethics, about political libertarianism, and about many other things. It was also why I became so interested in the cognitive science of how our beliefs can get so screwy, which eventually led me to Less Wrong, where I finally encountered that famous paragraph by I.J. Good:
I remember reading that paragraph and immediately thinking something like: Woah. Umm... yeah... woah. That... yeah, that's probably true. But that's crazy because... that changes fricking everything.
So I thought about it for a week, and looked up the counterarguments, and concluded that given my current understanding, an intelligence explosion was nearly inevitable (conditional on a basic continued progress of science) and that everything else I could spend my life working on was trivial by comparison.
So I mostly stopped blogging about philosophy of religion, read through all of Less Wrong, studied more cognitive science and AI, quit my job in L.A., and moved to Berkeley to become a visiting fellow with Singularity Institute.
The Level Above My Own
My move to Berkeley was a bit like the common tale of the smartest kid in a small town going to Harvard and finding out that he's no longer the smartest person in the room. In L.A., I didn't know anyone as devoted as I was to applying the cognitive science of rationality and cognitive biases to my thinking habits (at least, not until I attended a few Less Wrong meetups shortly before moving to Berkeley). But in Berkeley, I suddenly found myself among the least mature rationalists in my social world.
There is a large and noticeable difference between my level of rationality and the level of Eliezer Yudkowsky, Carl Shulman, Anna Salamon, and several others. Every week I learn new rationality techniques. Friends help me uncover cached beliefs about economics, politics, and utilitarianism. I've begun to use the language of anti-rationalization and Bayesian updates in everyday conversation. In L.A. I had become complacent because my level of rationality looked relatively impressive to me. Now I can see how far above my level humans can go.
I still have a lot to learn, and many habits to improve. Living in a community with rationalist norms is a great way to do those things. But a 4-year journey from evangelical Christian missionary to Singularity Institute researcher writing about rationality and Friendly AI is... not too shabby, I suppose.
And that's why I'm glad some people are writing about atheism and the basics of rationality. Without them, I'd probably still be living for Jesus.