ialdabaoth comments on Using existing Strong AIs as case studies - Less Wrong

Post author: ialdabaoth 16 October 2012 10:59PM -6 points




Comment author: ialdabaoth 17 October 2012 01:07:43AM 0 points

I would expect a bureaucracy to be capable of self-reflection, and to have a self-identity that exists independently of its constituent (human) decision-making modules. I would expect it to have a kind of "team spirit" or "internal integrity" that defines how it goes about solving problems, and which artificially constrains its decision tree away from "purely optimal" and towards "maintaining (my) personal identity".

In other words, I would expect the bureaucracy to have an identifiable "personality".

Comment author: Desrtopa 18 October 2012 03:58:49PM 1 point

This sounds very little like something I would expect someone who knew what a strong AI was but had never observed a bureaucracy to come up with as a way to determine whether bureaucracies are strong AIs or non-strong AIs.

Not everything that is capable of self-reflection and self-identity is a strong AI; indeed I think it's reasonable to say that out of the sample of observed things capable of self-reflection and self-identity, none of them are strong AIs.

Bureaucracies don't even fulfill the basic strong AI criterion of being smarter than a human being. They may perform better than an individual in certain applications, but then, so can weak AI, and bureaucracies often engage in behavior which would be regarded as insane if engaged in by an individual with the same goal.

Comment author: ialdabaoth 18 October 2012 08:20:30PM 2 points

This sounds very little like something I would expect someone who knew what a strong AI was but had never observed a bureaucracy to come up with as a way to determine whether bureaucracies are strong AIs or non-strong AIs.

That's very plausible; all of my AI research has been self-directed and self-taught, entirely outside of academia. It is highly probable that I have some very fundamental misconceptions about what it is I think I'm doing.

As I mentioned in the original post, I fully admit that I'm likely wrong, but presenting the idea in a "comment if you like" format to people who are far more likely than me to know seemed like the best way to challenge my assumption without inconveniencing anyone who might actually have something more important to do than schooling a noob.

Comment author: fubarobfusco 17 October 2012 01:20:55AM -1 points

I'm not sure how to tell what sorts of groups of humans have self-reflection. For animals, including human infants, we can use the mirror test. How about for bureaucracies?

I'm not sure whether "team spirit" might be a projection in the minds of members or observers; or in particular a sort of belief-as-cheering for the psychological benefit of members (and opponents). How would we tell?

Likewise, how would we inquire into a bureaucracy's decision tree? I don't know how to ask a corporation to play chess.

Comment author: ialdabaoth 17 October 2012 01:26:47AM 2 points

Bald assertion: the fact that "team spirit" might be a mere projection in the minds of members is as irrelevant to whether it causes self-reflection as the fact that "self-awareness" might be a mere consequence of synapse patterns.

Just because we're more intimately familiar with what "team spirit" feels like from the inside than we are with what having your axons wired up to someone else's dendrites feels like, doesn't mean that "team spirit" isn't part of an actual consciousness-generating process.

Comment author: fubarobfusco 17 October 2012 01:35:39AM -1 points

"You can't prove it's not!" arguments...?

Recommended reading: the Mysterious Answers to Mysterious Questions sequence.

Comment author: ialdabaoth 17 October 2012 01:42:08AM 2 points

No, I was presenting a potential counter to the idea that "I'm not sure whether 'team spirit' might be a projection in the minds of members or observers".

It might or might not be a projection in the minds of observers, but I don't think whether it is or not is relevant to the questions I'm asking, in the same sense that "are we conscious because we have a homunculus-soul inside of us, or because neurons give rise to consciousness?" isn't relevant to the question "are we conscious?"

We know we are conscious as a bald fact, and we accept that other humans are conscious whenever we reject solipsism; we happen to be finding out the manner in which we are conscious as a result of our scientific curiosity.

But accepting an entity as "conscious" / "self-aware" / "sapient" does not require that we understand the mechanisms that generate its behavior; only that we recognize that it has behavior that fits certain criteria.