Making My Peace with Belief
I grew up in an atheistic household.
Almost needless to say, I was relatively hostile towards religion for most of my early life. A few things changed that.
First, the apology of a pastor. A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people. My friend apologized to me after considering the matter. We stayed friends for a little while afterwards, although I left that school, and we lost contact.
I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.
The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid. This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.
Which is to say, beliefs may be epistemically irrational while being instrumentally rational.
The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing. It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.
Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.
The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here. I think there are probably good explanations for every individual experience even if I don't know them, but I am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life revolve around the same physical location. I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.
If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore? I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."
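The green-room intuition can be made concrete with a toy sketch (mine, not from the original post): a simple reward-averaging agent that learns to avoid an action with a large negative payoff, without any model of -why- the action is bad. The action names and reward values here are illustrative assumptions.

```python
# Toy illustration of the "green room" thought experiment: an agent that
# tracks the average reward of each action and greedily picks the best one.
# It only needs to "die" once before it stops walking into the green room.

def simulate(death_penalty=-100.0, safe_reward=0.0, trials=20):
    estimates = {"enter_green_room": 0.0, "stay_out": 0.0}  # running means
    counts = {"enter_green_room": 0, "stay_out": 0}
    choices = []
    for _ in range(trials):
        # Greedy choice; ties are broken in favor of exploring the room,
        # so the agent tries the green room exactly once.
        action = max(estimates,
                     key=lambda a: (estimates[a], a == "enter_green_room"))
        reward = death_penalty if action == "enter_green_room" else safe_reward
        counts[action] += 1
        # Incremental update of the running mean for this action.
        estimates[action] += (reward - estimates[action]) / counts[action]
        choices.append(action)
    return choices

choices = simulate()
# After a single bad experience, the agent never enters the room again -
# rationally justified or not, it has "learned" the house is haunted.
```

The point of the sketch: the agent's avoidance is instrumentally rational given its experience, even though it has no explanatory theory of green rooms at all.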
That was the point at which I concluded that beliefs can be -rational-. Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.
If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good. The sun isn't the center of the universe; some have called this theory "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.
Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.
Systems Theory Terms
Below are some notes that I took while trying to understand what exactly systems theory is all about.
Solstice 2015: What Memes May Come? (Part I)
Winter is coming, and so is Solstice season. There'll be large rationality-centric-or-adjacent events in NYC, the Bay Area, and Seattle (and possibly other places - if you're interested in running a Solstice event or learning what that involves, send me a PM). In NYC, there'll be a general megameetup throughout the weekend, for people who want to stay through Sunday afternoon, and if you're interested in shared housing you can fill out this form.
The NYC Solstice isn't running a kickstarter this year, but I'll need to pay for the venue by November 19th ($6125). So if you are planning on coming it's helpful to purchase tickets sooner rather than later. (Or preorder the next album or 2016 Book of Traditions, if you can't attend but want to support the event).
-
I've been thinking for the past couple years about the Solstice as a memetic payload.
The Secular Solstice is a (largely Less Wrong inspired) winter holiday, celebrating how humanity faced the darkest season and transformed it into a festival of light. It celebrates science and civilization. It honors the past, revels in the present and promises to carry our torch forward into the future.
For the first 2-3 years, I had a fair amount of influence over the Solstices held in Boston and San Francisco, as well as the one I run in NYC. Even then, the holiday evolved in ways I didn't quite predict. This happened both because different communities took it in somewhat different directions, and because (even in the events I run myself) factors came into play that shaped it. Which musicians are available to perform, and how does their stage presence affect the event? Which people from which communities will want to attend, and how will their energy affect things? Which jokes will they laugh at? What will they find poignant?
On top of that, I'm deliberately trying to spread the Solstice to a larger audience. Within a couple years, if I succeed, more of the Solstice will be outside of my control than within it.
Is it possible to steer a cultural artifact into the future, even after you let go of the reins? How? Would you want to?
In this post, I lay out my current thoughts on this matter. I am interested in feedback, collaboration and criticism.
Lessons from History?
(Epistemic status: I have not really fact checked this. I wouldn't be surprised if the example turned out to be false, but I think it illustrates an interesting point regardless of whether it's true)
Last year after Solstice, I was speaking with a rationalist friend with a Jewish background. He made an observation. I lack the historical background to know if it is exactly accurate (feel free to weigh in in the comments), but his notion was as follows:
Judaism has influenced the world in various direct ways. But a huge portion of its influence (perhaps the majority) has been indirectly through Christianity. Christianity began with a few ideas it took from Judaism that were relatively rare. Monotheism is one example. The notion that you can turn to the Bible for historical and theological truth is another.
But buried in that second point is something perhaps more important: religious truth is not found in the words of your tribal leaders and priests. It's found in a book. The book contains the facts-of-the-matter. And while you can argue cleverly about the book's contents, you can't disregard it entirely.
Empiricists may get extremely frustrated with creationists, for refusing to look outside their book for answers (instead of the natural world). But there was a point where the fact of the matter lay entirely in "what the priests/ruler said" as opposed to "what the book said".
In this view, Judaism's primary memetic success is in helping to seed the idea of scholarship, and a culture of argument and discussion.
I suspect this story is simplified, but these two points seem meaningful: a memeplex's greatest impact may be indirect, and may not have much to do with the attributes that are most salient on first glance to a layman.
Simplicity
So far, I've deliberately encouraged people to experiment with the Solstice. Real rituals evolve in the wild, and adapt to the needs of their community. And a major risk of ritual is that it becomes ossified, turning either hollow or dangerous. But if a ritual is designed to be mutable, what gives it its identity? What separates a Secular Solstice from a generic humanist winter holiday?
The simplest, most salient and most fun aspects of a ritual will probably spread the fastest and farthest. If I had to sum up the Solstice in nine words, they would be:
Light. Darkness. Light.
Past. Present. Future.
Humanity. Science. Civilization.
I suspect that without any special effort on my part (assuming I keep promoting the event but don't put special effort into steering its direction), those nine words would remain a focus of the event, even if groups I never talk to adopt it for themselves.
The most iconic image of the Solstice is the Candlelit story. At the apex of the event, when all lights but a single candle have been extinguished, somebody tells a story that feels personal, visceral. It reminds us that this world can be unfair, but that we are not alone, and we have each other. And then the candle is blown out, and we stand in the absolute darkness together.
If any piece of the Solstice survives, it'll be that moment.
If that were all that survived, I think that'd be valuable. But it'd also be leaving 90%+ of the potential value of the Solstice on the table.
Complex Value
There are several pieces of the Solstice that are subtle and important. There are also pieces of it that currently exist that should probably be tapered down, or adjusted to become more useful. Each of them warrants a fairly comprehensive post of its own. A rough overview of topics to explore:
Atheism.
Rationality.
Death.
Humanism.
Transhumanism.
Existential Risk.
The Here and Now.
The Distant Future.
My thoughts about each of these are fairly complex. In the coming weeks I'll dive into each of them. The next post, discussing Atheism, Rationality and Death, is here.
[link] New essay summarizing some of my latest thoughts on AI safety
New essay summarizing some of my latest thoughts on AI safety, ~3500 words. I explain why I think that some of the thought experiments that have previously been used to illustrate the dangers of AI are flawed and should be used very cautiously, why I'm less worried about the dangers of AI than I used to be, and why I nonetheless continue to be somewhat worried.
Backcover celebrity endorsement: "Thanks, Kaj, for a very nice write-up. It feels good to be discussing actually meaningful issues regarding AI safety. This is a big contrast to discussions I've had in the past with MIRI folks on AI safety, wherein they have generally tried to direct the conversation toward bizarre, pointless irrelevancies like "the values that would be held by a randomly selected mind", or "AIs with superhuman intelligence making retarded judgments" (like tiling the universe with paperclips to make humans happy), and so forth.... Now OTOH, we are actually discussing things of some potential practical meaning ;p ..." -- Ben Goertzel
The Future of Humanity Institute is hiring!
FHI is accepting applications for a two-year position as a full-time Research Project Manager. Responsibilities will include coordinating, monitoring, and developing FHI’s activities, seeking funding, organizing workshops and conferences, and effectively communicating FHI’s research. The Research Project Manager will also be expected to work in collaboration with Professor Nick Bostrom, and other researchers, to advance their research agendas, and will additionally be expected to produce reports for government, industry, and other relevant organizations.
Applicants will be familiar with existing research and literature in the field and have excellent communication skills, including the ability to write for publication. He or she will have experience of independently managing a research project and of contributing to large policy-relevant reports. Previous professional experience working for non-profit organisations, experience with effective altruism, and a network in the relevant fields associated with existential risk may be an advantage, but are not essential.
To apply please go to https://www.recruit.ox.ac.uk and enter vacancy #124775 (it is also possible to find the job by choosing “Philosophy Faculty” from the department options). The deadline is noon UK time on 29 August. To stay up to date on job opportunities at the Future of Humanity Institute, please sign up for updates on our vacancies newsletter at https://www.fhi.ox.ac.uk/vacancies/.
The Value of Those in Effective Altruism
Summary/TL;DR: this piece offers Fermi Estimates of the value of those in EA, focusing on the distinctions between typical EA members and dedicated members (defined below). These estimates suggest that, compared to the current movement baseline, we should prioritize increasing the number of “typical” EA members and getting more non-EA people to behave like typical EA members, rather than getting typical EAs to become dedicated ones.
[Acknowledgments: Thanks to Tom Ash, Jon Behar, Ryan Carey, Denis Drescher, Michael Dickens, Stefan Schubert, Claire Zabel, Owen Cotton-Barratt, Ozzie Gooen, Linchuan Zheng, Chris Watkins, Julia Wise, Kyle Bogosian, Max Chapnick, Kaj Sotaja, Taryn East, Kathy Forth, Scott Weathers, Hunter Glenn, Alfredo Parra, William Kiely, Jay Quigley, and others who prefer to remain anonymous for looking at various draft versions of this post. Thanks to their feedback, the post underwent heavy revisions. Any remaining oversights, as well as all opinions expressed, are my responsibility.]
This article is a follow-up to "Celebrating All Who Are In Effective Altruism"
[link] "The Happiness Code" - New York Times on CFAR
http://www.nytimes.com/2016/01/17/magazine/the-happiness-code.html
Long. Mostly quite positive, though does spend a little while rolling its eyes at the Eliezer/MIRI connection and the craziness of taking things like cryonics and polyamory seriously.
A toy model of the treacherous turn
Jaan Tallinn has suggested creating a toy model of the various common AI arguments, so that they can be analysed without loaded concepts like "autonomy", "consciousness", or "intentionality". Here a simple attempt for the "treacherous turn"; posted here for comments and suggestions.
Meet agent L. This agent is a reinforcement-based agent, rewarded/motivated by hearts (and some small time penalty each turn it doesn't get a heart):
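As a minimal stand-in sketch (mine, not the post's actual model), agent L's reward structure might look like this; the episode length and reward magnitudes are illustrative assumptions:

```python
# A toy version of agent L's reward signal: each turn it either collects
# a heart (positive reward) or pays a small time penalty for going without.

HEART_REWARD = 1.0
TIME_PENALTY = -0.1

def episode_reward(heart_turns, total_turns=10):
    """Total reward over a fixed episode, given the set of turns on which
    the agent collected a heart."""
    total = 0.0
    for t in range(total_turns):
        total += HEART_REWARD if t in heart_turns else TIME_PENALTY
    return total

score = episode_reward({0, 3, 7})  # hearts on three of ten turns
```

With hearts on three turns and the penalty on the remaining seven, the episode scores 3.0 - 0.7 = 2.3. The point of stripping the model down this far is that everything about the agent's behavior follows from this bare reward signal, with no appeal to "autonomy" or "intentionality".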

FHI is hiring researchers!
The Future of Humanity Institute at the University of Oxford invites applications for four research positions. We seek outstanding applicants with backgrounds that could include computer science, mathematics, economics, technology policy, and/or philosophy.
PSA: even if you don't usually read Main, there have been several worthwhile posts there recently
A lot of people have said that they never look at Main, only Discussion. And indeed, LW's Google Analytics stats say that Main only gets one-third of the views that Discussion does.
Because of this, I thought that I'd point out that December has been an unusually lively month for Main, with several high-quality posts that you may be interested in checking out if you haven't already:
- LessWrong 2.0 (Vaniver): discussion about what to do with LW in order to stop its decline. Different from previous discussions in that this time, MIRI and TrikeApps have agreed to make the changes that result from the discussion.
- Why startup founders have mood swings (and why they may have uses) (AnnaSalamon and Duncan_Sabien): what the title says
- Results of a One-Year Longitudinal Study of CFAR Alumni (Unnamed): CFAR has studied the impact of their workshops on people a year after taking the workshops, and have promising results.
- The art of grieving well (Valentine): a beautiful and important post on the function of grief, and how to make the best out of it. A post intended for a sequence on "the sub-art of subconsciously seeking out and eliminating ugh fields and also eliminating the inclination to form them in the first place".
- European Community Weekend 2016 (nino): ECW2016 is confirmed to happen!
- Why CFAR? The view from 2015 (PeteMichaud): a report on what CFAR has achieved in 2015, how it has changed, and what it will do in the future.