Today's post, Against Modal Logics, was originally published on 27 August 2008. A summary (taken from the LW wiki):
Unfortunately, very little of philosophy is actually helpful in AI research, for a few reasons.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Dreams of AI Design, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
I had to do a bit of searching, but it seems that Eliezer (or at least Eliezer_2008) considers causal arrows to be more fundamental than computations:
So here's my understanding of Eliezer_2008's guess of how all the reductions would work out: mind reduces to computation, which reduces to causal arrows, which reduce to some sort of similarity relationship between configurations, and the universe fundamentally is a (timeless) set of configurations and their amplitudes.
Interestingly, Pearl himself doesn't seem nearly as ambitious about how far to push the reduction of "causality," and explains that his theory
which bears almost no resemblance to Eliezer's idea of reducing causality to similarity.
I still don't understand what Barbour's theory actually says, or whether it says anything at all. It seems to be one of Eliezer's more bizarre endorsements.