Paul Almond's site has many philosophically deep articles on theoretical rationality along LessWrongish lines, including but not limited to some great atheology, an attempt to solve the problem of arbitrary UTM choice, a possible anthropic explanation of why space is 3D, a thorough defense of Occam's Razor, a lot of AI theory that I haven't tried to understand, and an attempt to explain what it means for minds to be implemented (related in approach to this and this).
Okay, now for a more substantive comment (ETA: see note 1). I read the essay on what it means for a mind to be implemented, and Almond discusses a "problem" posed by Searle: "Can you call a wall a mind (or a word processor program) on the grounds that you can find some isomorphism between the molecular motion of the wall and some more interesting program?" And thus: "Why is the isomorphism to the program somehow less valid an interpretation than the one we apply to an actual computer running the known program?"
I really don't see what the problem is here. The argument relies on the possibility of finding an isomorphism between an arbitrary "interesting" algorithm and something completely random. Yes, you can do it: but only by applying an interpreter of such complexity that it is itself the algorithm, and the random process is just background noise.
The reason that we call a PC (or a domino setup) a "computer" is that its internal dynamics are consistently isomorphic to the abstract calculation procedure the user wants it to perform. In a random environment, there is no such consistency, and as time progresses you must keep expanding your interpretation so that it continues to output what WordStar does. Which, again, makes you the algorithm, not the wall's random molecular motions.
(Edit to add: By contrast, a PC's interpreter (the graphics card, monitor, mouse, keyboard, etc.) does not change in complexity, nor does the mapping it performs from the CPU/memory to me.)
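To make the asymmetry concrete, here's a toy sketch in Python (entirely my own illustration, not Almond's or Searle's; a square-numbers "program" stands in for WordStar): a genuine computer needs only a fixed, constant-size interpretation, while the wall needs a lookup table that must be extended, step by step, with the very computation it's supposed to be performing.

```python
import random

def target_program(n):
    """The 'interesting' computation we want to find in a physical system:
    here, the first n square numbers standing in for WordStar."""
    return [i * i for i in range(n)]

# A genuine computer: its physical states consistently track the
# computation, so a small FIXED interpretation suffices -- read the
# value off and you're done. It never needs revising.
def fixed_interpreter(machine_state):
    return machine_state

# A wall: its "states" are just noise. The only way to make it "run"
# the program is to pair each noise sample with the next correct
# output -- a lookup table we must keep extending at every step.
wall_interpretation = {}

def wall_interpreter(noise, step):
    # The mapping itself has to contain the answer.
    wall_interpretation[(noise, step)] = step * step
    return wall_interpretation[(noise, step)]

n = 10
machine_trace = [fixed_interpreter(i * i) for i in range(n)]
wall_trace = [wall_interpreter(random.random(), i) for i in range(n)]

assert machine_trace == wall_trace == target_program(n)
# Both "implement" the program, but the wall's interpreter now stores
# n entries: its description length grows with the computation. All
# the work is in the mapping, not in the wall.
print(len(wall_interpretation))  # 10, and growing with every step
```

The point of the sketch: the wall-interpreter's description length grows linearly with the trace it "reads off", which is exactly the sense in which the interpreter, not the wall, is doing the computing.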
Surely, the above differences show how you can meaningfully differentiate between true programs/minds and random processes, yet Almond doesn't mention this possibility (or I don't understand him).
1 By this remark, I absolutely did not mean to trivialize the other comments here. Rather, at the time I posted this, there were few comments, and I had just made a comment with no substance. The comparison was with my other comment, not with any other commenter.
One picky remark: Paul Almond ascribes this argument to Searle, and indeed it appears in a work of Searle's from 1990; but Hilary Putnam published a clearer and more rigorous presentation of it two years earlier, in his book "Representation and Reality".
(Putnam also demolished, before Lucas even published it, the rather silly Gödelian argument against artificial intelligence that's commonly attributed to J. R. Lucas. Oh, and he was one of the key players in solving Hilbert's 10th problem. Quite a clever chap.)