Comment author: Ian_C. 08 November 2008 12:48:38AM 3 points

"For there to be a concept, there has to be a boundary. So what am I recognizing?"

I think you're just recognizing that the alien artifact looks like something that wouldn't occur naturally on Earth, rather than seeing any kind of essence. Earth is where we originally formed the concept, and we didn't need an essence there; we just divided the things we know we made from the things we know we didn't.

Comment author: Ian_C. 02 November 2008 10:17:37AM 0 points

I think I see where the disconnect was in this conversation. Lanier was accusing general AI people of being religious. Yudkowsky took that as a claim that something he believed was false, and wanted Lanier to say what.

But Lanier wasn't saying anything in particular was false. He was saying that when you tackle these Big Problems, there are necessarily a lot of unknowns, and when you have too many unknowns, reason and science are inapplicable. Science and reason work best when you have one unknown and lots of knowns. If you try to bite off too big a chunk at once, you end up reasoning in a domain that is now only, say, 50% fact, and that reminds him of the "reasoning" of religious people.

Knowledge is a big interconnected web, with each fact reinforcing the others. You have to grow it from the edge. And our techniques are designed for edge space.

Comment author: Ian_C. 23 October 2008 07:42:00AM 2 points

The ability to become emotionally detached is a useful skill (e.g. if you are being tortured), but when it becomes an automatic reflex to any emotion, it can take all the colour out of life.

Sometimes highly intelligent people are also overwhelmingly sensitive/empathetic, so detaching is very tempting. The first few minutes of this video, with the genius girl walking around the spaceship, show what it's like to be highly empathetic (Firefly).
http://www.youtube.com/watch?v=MsyuTLYx59g

But also: emotions come from the subconscious, and the subconscious contains that which is done repeatedly on the conscious level. So if you are habitually rational, does that affect your subconscious and therefore your emotions?

I think what happens is, you are so consistent on the conscious level (e.g. the way our host cross-links all his posts) that the subconscious is also highly consistent. So when it produces an emotion, it produces it with the whole of itself, instead of just one part contradicted by another (mixed emotions). Therefore the genius has very strong emotions, which interestingly is the stereotype: the overwrought genius who flies off the handle.

The sheer strength of having a conscious and subconscious in total agreement, and both in turn in agreement with reality, can be overwhelming and, like the girl above ("It's getting very crowded in here!"), you just want to shut it off.

In response to Ethical Injunctions
Comment author: Ian_C. 21 October 2008 01:28:46PM 0 points

I agree that there are certain moral rules we should never break. Human beings are not omniscient, so all of our principles have to be principles-in-a-context. In that sense every principle is vulnerable to a black swan, but there are levels of vulnerability. The levels correspond to how wide-ranging the abstraction is: the more abstract, the less vulnerable.

Injunctions about truth are based on the metaphysical fact of identity, which is implied in *every single object* we encounter in our entire lives. So epistemological injunctions are the most invulnerable. As for the one about not helping the ferry boat captain - well, helping him would be an absolute *in normal life*, but war is not normal life. It's a big, ugly black swan. They should not feel guilty over that poor fellow, because "it's just war" (and I mean that in a deep epistemological sense, not a redneck sense).

Comment author: Ian_C. 15 October 2008 06:01:59PM 0 points

I don't think it's possible that our hardware could trick us in this way (making us do self-interested things by making them appear moral).

To express the idea "this would be good for the tribe" would require the use of abstract concepts (tribe, good), but abstract concepts/sentences are precisely the things that are observably under our conscious control. What *can* pop up without our willing it are feelings or image associations, so the best trickery our hardware could hope for is to make something feel good.

Comment author: Ian_C. 09 October 2008 11:15:46PM 0 points

The meta-argument others have mentioned ("Telling the world you let me out is the responsible thing to do") would work on me.

Comment author: Ian_C. 08 October 2008 01:44:46PM 0 points

Re: why rationality can't be learned by rote -

If you introspect on a process of reason, you see that you actually *choose* at each step which path of inquiry to follow next and which to ignore. Each choice takes the argument to the next step, ultimately driving it to completion. Reason is "powered by choice(TM)", which is why it is incoherent to argue rationally for determinism, and also why it can't be learned by rote.

Software developers (such as myself), in our more abstract moments, can think of reason as simply encoding one's premises as a string of symbols standing for definitions and mechanically applying the rules of deduction (Prolog style). But introspection belies this: it's actually highly creative and messy. Reason is an art, not a science.
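To make that mechanical picture concrete, here is a minimal forward-chaining sketch of it (my own illustration with made-up fact and rule names, not anything from the original comment or from Prolog itself). Notice that the loop leaves no room for choice: every rule that can fire, fires.

```python
# A toy "Prolog style" deduction engine: facts plus Horn-clause rules,
# with the rules applied mechanically until nothing new follows.
# Fact and rule names are invented purely for illustration.

facts = {"socrates_is_a_man"}
rules = [
    # (premises, conclusion)
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # no choosing here: any rule that matches is applied
            changed = True

print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die'}
```

The contrast with introspection is the point: actual reasoning involves choosing which inference to pursue, whereas this loop pursues all of them indiscriminately.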

Comment author: Ian_C. 07 October 2008 04:34:24PM 4 points

Except the universe doesn't care how much backbreaking effort you make, only whether you get the cause and effect right, which is why cultures that emphasize hard work are not overtaking cultures that emphasize reason (Enlightenment cultures). Of course, even these cultures must still do some work: that of enacting their cleverly thought-out causes.

Comment author: Ian_C. 07 October 2008 06:48:11AM 0 points

"If you are talking about a timescale of decades, than intelligence augmentation does seems like a worthy avenue of investment"

This seems like the way to go to me. It's like "generation ships" in sci-fi. Should we launch ships to distant star systems today, knowing that ships launched 10 years from now will overtake them on the way?

Of course in the case of AI, we don't know what the rate of human enhancement will be, and maybe the star isn't so distant after all.
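The overtaking logic is easy to work out with toy numbers (the speeds and delay below are entirely hypothetical; nothing in the comment fixes them):

```python
# Generation-ship arithmetic: a ship launched now is overtaken by a
# faster ship launched later, unless the destination is close enough.
# All numbers are assumed for illustration.

v_early = 0.05   # early ship's speed, as a fraction of c (assumed)
v_late = 0.10    # later ship's speed, as a fraction of c (assumed)
delay = 10.0     # years between the two launches (assumed)

# The later ship catches up when v_late * (t - delay) == v_early * t.
t_overtake = v_late * delay / (v_late - v_early)
print(t_overtake)  # 20.0 years after the first launch

# The early ship still arrives first if the star is nearer than
# v_early * t_overtake (1.0 light-year with these numbers) -
# i.e. "maybe the star isn't so distant after all."
```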

Comment author: Ian_C. 05 October 2008 03:03:09AM 0 points

I don't want to sign up for cryonics because I'm afraid I will be revived brain-damaged. But maybe others are worried they will have the social status of a freak in that future society.
