I actually conclude the opposite from you in terms of the value of "conflationary alliances".  I will pick on the examples you provided:
Alice dismisses Bob's decent argument against eating pork chops with an appeal to authority (if we consider science authoritative, which I do not) or an appeal to popularity (if we consider science valid because of popular recognition that certain hypotheses have never been disproven). The fact that Bob and Alice are having the discussion at all opens the door for Bob to explain his own meaning and show Alice that, although her use of the term is exclusive to humans, she may recognize features of pigs that deepen her understanding of herself and of her affinity for the group in which they met.

Dana invites Charlie to use the same weak argument as Alice did, and Charlie accepts. Had he wanted to defend his fear of AI, he could have explained more precisely what he meant, again opening the door to a deeper understanding of himself and of the reasons he hangs out with Dana, and she could have learned something instead of restricting her responsibility to consider others' points of view to "what has reached consensus".

Faye uses a lack of knowledge as an excuse to avoid treating AI "as valuable in the way humans are". Eric does not point this out to her or otherwise take the opportunity to defend the moral patienthood he attributes to AI.

I don't view any of these failures on the part of your characters as the result of sloppy definitions or conflationary alliances. While you may be arguing that such conflation makes these kinds of dodges easier, those who want to dodge will find a way to do so, and those who want to engage honestly will hone their definitions. I think conflationary alliances create more opportunities for such progress.

I'd also like to point out that nearly all words share this kind of conflationary essence, simply because Plato was wrong about the world of ideals. We discovered that other people made different sounds to refer to different things, and our tendency to use language grew out of that. Every brain has its own very personal neural pattern representing the meaning of every word its owner understands. The patterns match closely enough for us to communicate complicated ideas and develop technology, but if you dig into the exact meaning for any individual, you will find differences.

Lastly, I'd like to offer you a different solution than establishing a clear and definite meaning for "consciousness" or any other term. It requires a bit more agility than you might be comfortable with, because the persistence of alliances is not nearly as important as developing a deeper understanding of others. The solution is to point to the differences openly and gratefully when you see people whose uses of a word like "consciousness" differ to the degree that you'd say they shouldn't be allied. Openly, because I agree that those wishing the alliance to persist may attempt to downplay the differences. Gratefully, because the disagreement is, like all alternative and reasonable hypotheses in science, an opportunity for one or both parties to deepen their understanding of each other and of the whole group.

I said "a bit more agility" because I noticed that you worked for a PhD, and that reminded me of Jeff Schmidt's book, Disciplined Minds, about how we constrain ourselves when we aim to earn what others might provide to us for being a certain way.   I mean no offense, and very much prefer to relieve you of a feeling I think you might have, that things should be more orderly.  Chaos is beautiful when you can accept it, and sometimes it makes more sense than you expected it to, if you give it a chance.

"In a moral dilemma where you lost something either way, making the choice would feel bad either way, so you could temporarily save yourself a little mental pain by refusing to decide." From Harry Potter and the Methods of Rationality, which I'm reading at hpmor.com. My solution is to make that dilemma more precise so that I will know which way I'd go.  It nearly always requires more details than the creator of the hypothetical is willing to provide.

I have been the webmaster of voluntaryist.com since I inherited it from its previous owner, my friend Carl Watner. It seems to me that coercion is the single most deleterious strategy humans have, and yet it's the one we rely on regularly to create and maintain social cohesion. If we want to minimize suffering, creating suffering is something we should avoid, and coercion is basically a promise to create suffering. I seek people who can defend coercion so that I can learn to see things more accurately, or help others do so.