Comments

I like it! In addition, I suppose you could use a topic-wide prior for those groups that you don't have much data on yet.
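To make the topic-wide-prior idea concrete: one common way to do it is shrinkage, treating the topic-wide rate as a handful of pseudo-observations so that groups with little data default toward the prior and data-rich groups rely on their own counts. This is just an illustrative sketch (the function name, parameters, and Beta-Binomial-style framing are my own assumptions, not anything from the original suggestion):

```python
def shrunk_estimate(group_successes, group_trials, prior_mean, prior_strength):
    """Blend a group's observed rate with a topic-wide prior.

    prior_strength acts like a count of pseudo-observations drawn
    from the topic-wide rate (Beta-Binomial-style shrinkage).
    """
    return (group_successes + prior_strength * prior_mean) / (
        group_trials + prior_strength
    )

# A group with no data falls back entirely on the topic-wide prior:
print(shrunk_estimate(0, 0, prior_mean=0.3, prior_strength=10))   # 0.3

# A data-rich group is dominated by its own observations:
print(shrunk_estimate(900, 1000, prior_mean=0.3, prior_strength=10))
```

With `prior_strength=10`, ten observations from the group count as much as the prior; tune it to control how quickly groups "earn" their own estimates.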

Personally I'd rather have the public be fascinated with how chatbots think than ignorant of the topic. Sure, non experts won't have a great understanding, but this sounds better than likely alternatives. And I'm sure people will spend a lot of time on either future chatbots, or future video games, or future television, or future Twitter, but I'm not convinced that's a bad thing.

The regulation you mention sounds very drastic & clumsy to my ears. I'd suggest starting by proposing something more widely acceptable, such as regulating highly effective self modifying software that lacks security safeguards.

Basing ethical worth on qualia is very close to dualism, to my ears. I think instead the question must rest on a detailed understanding of the components of the program in question, & the degree of similarity to the computational components of our brains.

Excellent point. We essentially have 4 quadrants of computational systems:

  • Looks nonhuman, internally nonhuman - All traditional software is in this category
  • Looks nonhuman, internally humanoid - Future minds that are at risk for abuse (IMO)
  • Looks humanoid, internally nonhuman - Not an ethical concern, but people are likely to make wrong judgments about such programs.
  • Looks humanoid, internally humanoid - Humans. The blogger claims LaMDA also falls into this category.

Good point. In my understanding it could go either way, but I'm open to the idea that the worst disasters are less than 50% likely, given a nuclear war.

Good point. Unless of course one is more likely to be born into universes with high human populations than universes with low human populations, because there are more 'brains available to be born into'. Hard to say.

In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics. 

Future hazards deserve more research funding, yes, but remember that the future is not certain.

What's the status of this meetup, CitizenTen? Did you hear back?