Comments

I call all those examples opinions.

Sure, opinions come to people from a few different sources. I speculate that interpersonal transmission is the most common, but they can also originate in someone's head, either through careful thought or on a passing whim.

People don't have opinions - opinions have people.

Often, one hears someone express a strange, wrong-seeming opinion. The bad habit is to view this as an intentional bad act on that person's part. The good habit is to remember that the person heard this opinion, accepted it as reasonable, & may have put no further thought into the matter.

Opinions are self-replicating & rarely fact-checked. People often subscribe to 2 contradictory opinions.

Epistemic status: I'm trying this opinion on. It's appealing so far. 

I like it! In addition, I suppose you could use a topic-wide prior for those groups that you don't have much data on yet.
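A minimal sketch of how that topic-wide prior could work, assuming each group carries simple success/trial counts (the group names, counts, and prior strength below are all hypothetical): shrink each group's observed rate toward the pooled topic rate, so groups with little or no data fall back to the topic-wide baseline.

```python
# Minimal sketch: shrink each group's observed rate toward a topic-wide prior.
# Group names and counts are hypothetical; prior_strength is an assumed tuning knob.

topic_counts = {              # (successes, trials) observed per group
    "group_a": (40, 100),     # plenty of data -> estimate stays near 40%
    "group_b": (3, 4),        # sparse data -> pulled toward the topic rate
    "group_c": (0, 0),        # no data -> falls back to the topic-wide prior
}

# Build the topic-wide prior by pooling all groups, then express it as
# Beta(alpha0, beta0) pseudo-counts worth `prior_strength` observations.
pooled_s = sum(s for s, _ in topic_counts.values())
pooled_n = sum(n for _, n in topic_counts.values())
prior_rate = pooled_s / pooled_n
prior_strength = 10
alpha0 = prior_rate * prior_strength
beta0 = (1 - prior_rate) * prior_strength

for group, (s, n) in topic_counts.items():
    shrunk = (alpha0 + s) / (alpha0 + beta0 + n)   # posterior mean of a Beta-Binomial
    print(f"{group}: raw {s}/{n}, shrunk estimate {shrunk:.2f}")
```

Here prior_strength sets how many "virtual" observations the topic-wide prior is worth, i.e. how hard sparse groups get pulled toward the topic baseline.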

Personally I'd rather have the public be fascinated with how chatbots think than ignorant of the topic. Sure, non-experts won't have a great understanding, but that sounds better than the likely alternatives. And I'm sure people will spend a lot of time on future chatbots, or future video games, or future television, or future Twitter, but I'm not convinced that's a bad thing.

The regulation you mention sounds very drastic & clumsy to my ears. I'd suggest starting by proposing something more widely acceptable, such as regulating highly effective self-modifying software that lacks security safeguards.

Basing ethical worth on qualia sounds very close to dualism, to my ears. I think the question must instead rest on a detailed understanding of the components of the program in question, & on their degree of similarity to the computational components of our brains.

Excellent point. We essentially have 4 quadrants of computational systems:

  • Looks nonhuman, internally nonhuman - All traditional software is in this category
  • Looks nonhuman, internally humanoid - Future minds that are at risk for abuse (IMO)
  • Looks humanoid, internally nonhuman - Not an ethical concern, but people are likely to make wrong judgments about such programs.
  • Looks humanoid, internally humanoid - Humans. The blogger claims LaMDA also falls into this category.

Good point. As I understand it, it could go either way, but I'm open to the idea that the worst disasters are less than 50% likely, given a nuclear war.
