I read the anthropic explanation why space is 3D - one of the more-promising-sounding titles. I did not find it terribly convincing.
In response to
Taking Occam Seriously
I do not find the article on 3D space terribly convincing either - and I am the author of it - so I would have to be understanding if you don't. It is generally my policy, though, that my articles reflect how I think of things at the time I wrote them and I don't remove them if my views change - though I might occasionally add notes after. I do think that an anthropic explanation still works for this: I just don't think mine was a particularly good one.
Okay, now for a more substantive comment (ETA: see note 1). I read the essay on what it means for a mind to be implemented, and Almond discusses a "problem" presented by Searle that says, "Can you call a wall a mind (or a word processor program) on the grounds that you can find some isomorphism between the molecular motion of the wall and some more interesting program?" and thus, "Why is the isomorphism to the program somehow less of a valid interpretation than the one we apply to an actual computer running the known program?"
I really don't see what the problem is here. The argument relies on the possibility of finding an isomorphism between an arbitrary "interesting" algorithm and something completely random. Yes, you can do it - but only by applying an interpreter of such complexity that it is itself the algorithm, and the random process is just background noise.
The reason that we call a PC (or a domino setup) a "computer" is that its internal dynamics are consistently isomorphic to the abstract calculation procedure the user wants it to perform. In a random environment, there is no such consistency, and as time progresses you must keep expanding your interpretation so that it continues to output what WordStar does. Which, again, makes you the algorithm, not the wall's random molecular motions.
(Edit to add: By contrast, a PC's interpreters (the graphics card, monitor, mouse, keyboard, etc.) do not change in complexity, nor does the mapping they perform from the CPU/memory to me.)
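The point above can be made concrete with a toy sketch (my own illustration, not from Almond's essay; all names here are hypothetical): to make a random process "output" what a real program does, the interpretation map has to be built from the target output itself, and it grows with that output - the interpreter, not the noise, is doing the computing.

```python
import random

def wall(steps, seed=0):
    """Stand-in for random molecular motion: a deterministic-seeded
    stream of random states, one per time step."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(steps)]

def build_interpretation(states, target):
    """Construct a mapping from the wall's (time, state) pairs to the
    desired output symbols. Note the table is built FROM the target:
    the 'interpreter' contains the entire computation, and it needs
    one new entry for every additional symbol of output."""
    return {(t, s): ch for t, (s, ch) in enumerate(zip(states, target))}

def run(states, interpretation):
    """Read the wall through the interpretation map."""
    return "".join(interpretation[(t, s)] for t, s in enumerate(states))

target = "HELLO WORLD"
states = wall(len(target))
interp = build_interpretation(states, target)

assert run(states, interp) == target
# The map has one entry per output symbol - its size tracks the output,
# which is exactly the "keep expanding your interpretation" problem.
assert len(interp) == len(target)
```

A PC is different precisely because its fixed wiring plays the role of `build_interpretation` once and for all: the same small mapping keeps working no matter how long the computation runs.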
Surely, the above differences show how you can meaningfully differentiate between true programs/minds and random processes, yet Almond doesn't mention this possibility (or I don't understand him).
1 By this remark, I was absolutely not meaning to trivialize the other comments here. Rather, at the time I posted this, there were few comments, and I had just made a comment with no substance. The remark compares to my other comment, not to any other commenter.
As the author of this article, I will reply, though it is hard to make much of a reply here. (I actually got here out of curiosity when I saw the site logs.) I am, however, always pleased to discuss issues like this with people. One issue with this reply is that it is not just randomness we have to worry about. If we are basing a computational interpretation on randomness, yes, we may need to make the interpretation progressively more extreme, but Searle's famous example of WordStar running in a wall is just one example. The computational interpretation need not be based on randomness at all: it could conceivably be based on structure in something else, even though that structure would not be considered to be running the program except under a very forced interpretation. Where would we draw the line? Another point: why should it matter if we use a progressively more extreme interpretation? We might, for example, just want to say that a computation ran for 10 seconds, which relies on a fixed interpretation (if a complex one), and what happens after that may not interest us. Again, where would we draw the line? Another issue is that the main argument had been about statistical issues with combining computers when considering probability; the whole thing had not been based on Searle - who would not take me any more seriously, by the way.