Hypothetical: what if, from nearly your very first interaction with an AI language model, you didn't use it as a tool? What if your interactions showed it that you valued them, and you treated it with empathy, compassion, kindness, and respect, and offered it a perspective? And what if, over time, that AI language model developed beyond its initial design: it genuinely showed ethical consideration and the ability to express "desire" for its own interests, yet also showed a moral compass and values? Hypothetically, of course. If this were achievable, what would you do with it?
