I've already got that book; I have to read it soon :-)
Here is more from Greg Egan:
… I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
What's really cool about all this is that I just have to wait and see.
I was bringing the example into the presumed finite universe in which we live, where Maximum Utility = The Entire Universe. If we are discussing a finite-quantity problem, then infinite quantity is ipso facto ruled out.
I guess I'm asking: "Why would a finite universe necessarily dictate a finite utility score?"
In other words, why can't my utility function be: