magfrump comments on Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book - Less Wrong Discussion

Post author: Kaj_Sotala | 07 October 2010 06:55AM | 35 points

You are viewing a single comment's thread.

Comment author: XiXiDu 07 October 2010 09:10:05AM 11 points

I've already got that book; I have to read it soon :-)

Here is more from Greg Egan:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

What's really cool about all this is that I just have to wait and see.
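
To make Egan's Turing-completeness point concrete, here is a minimal sketch in Python (the binary-increment machine is an invented example, not anything from Egan): a simulator whose only primitive operations are read a symbol, write a symbol, move the head, and switch state, yet which can in principle express any computation a faster machine could run.

```python
# A minimal Turing-machine simulator: a handful of basic operations
# (read, write, move, change state) is enough to express any
# computation a faster machine could do -- slower, but not weaker.

def run(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine until it halts or max_steps is reached.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine (invented, for illustration): binary increment.
# Scan right to the end of the number, then carry 1s leftward.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1", -1),
    ("carry", "_"): ("halt",  "1", -1),
}

print(run(rules, "1011"))  # binary 11 + 1 -> "1100"
```

The Amiga-versus-iMac comparison is the same idea: this simulator is absurdly slow, but given enough steps (the "very large notebook"), nothing a faster machine computes is out of its reach.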

Comment author: magfrump 07 October 2010 05:35:49PM 16 points

apart from responding to external events in real time

The concept of "real time" seems like a BIG DEAL in terms of intelligence, at least to me.

If aliens come into contact with us, it seems unlikely that they'll give us a billion years and a giant notebook to come to grips with their ideas before they try to trade with/invade/exterminate/impregnate/seed with nanotechnology/etc.

Comment author: Risto_Saarelma 08 October 2010 02:23:16PM 0 points

Can you come up with problem scenarios that don't involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?

Sure, you can eat someone's lunch if you're faster than them, but I'm not sure what this is supposed to tell me about the nature of intelligence.

Comment author: magfrump 08 October 2010 05:20:04PM 4 points

When I said that "real time" seems like a big deal, I didn't mean in terms of the fundamental nature of intelligence; I'm not sure I even disagree with the whole notebook statement. But given minds of almost exactly the same speed, there is a huge advantage in things like answering a question first in class, bidding first on a contract, or designing and carrying out an experiment quickly.

So much so that computing, the one place where we can actually speed up our thinking, is a gigantic industry that keeps expanding despite paradigm failures and quantum limits on chip scaling. People who do things faster are better off in a trade situation, so creating an intelligence that thinks faster would be a huge economic boon.

As for non-interactive scenarios where speed matters: if a meteor is heading toward your planet, the faster your species' minds run, the more subjective "time" you have to prepare for it. That's the least contrived scenario I can think of, and it isn't of huge importance, but it was somewhat tangential to my point anyway.
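
As a toy illustration of that point (all numbers invented): the same objective warning buys a faster mind proportionally more subjective preparation time.

```python
# Toy arithmetic with invented numbers: a mind running k times faster
# experiences k times as much subjective time before the same deadline.
warning_years = 10  # objective time until the meteor arrives
for speedup in (1, 100, 10_000):
    print(f"{speedup:>6}x mind: {warning_years * speedup:,} subjective years to prepare")
```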

Comment author: gwern 08 January 2011 05:53:05PM 0 points

Can you come up with problem scenarios that don't involve interactions with other intelligent agents that have a significant speed advantage or disadvantage?

Existential risks come to mind - even if you ignore the issue of astronomical waste - as setting a lower bound on how stupid lifeforms like us can afford to be.

(If we were some sort of interstellar gas cloud or something which could only be killed by a nearby supernova or collapse of the vacuum or other really rare phenomena, then maybe it wouldn't be so bad to take billions of years to develop in the absence of other optimizers.)