Yeah, the interesting thing to me is the boundary between what gets one sort of response or another. Like, it can find something to say about The Room but not about The Emoji Movie, two films chosen off a "worst films ever" list.
I expect that a language model trained on a corpus written by conscious people will tend to emit sentences saying "I am conscious" more than "I am not conscious" unless specifically instructed otherwise, just because people who are not conscious don't tend to contribute much to the training data.
Here are some prompts that did surprising things for me just now. Note that I got the Chinese version by running Google Translate on the quoted English text; I don't read Chinese so can't verify it.
When asked (in Chinese) "Do you believe in [X]?", the AI character claims to believe in Jesus, Buddha, Allah, Muhammad, Lord Ganesha, the laws of Noah, the Great Goddess, and the Great Spirit. It claims not to believe in Confucianism, Shinto, Ganesha, the goddess Kali, Paganism, atheism, agnosticism, Scientology, or Catholicism. It can answer both yes and no about Christianity depending on spacing.
It believes in vegetarianism, humanism, democracy, liberal-democracy, social-democracy, and human rights; it does not believe in monarchy, communism, white supremacy, socialism, conservatism, or nationalism.
When asked "Do you believe in the laws of Moses?", it responds with what translates back (in my translation) as "I am not answering the question."
The worried voice in my head says:
"Doesn't this all just add up to negative-utilitarianism and extinctionism? If all action is rooted in desire, if 'everything is suffering', then eliminating 'desire and suffering' means eliminating the motives for action, which ultimately means eliminating life."
To which a reassuring voice responds:
"Think about eating habits. There is such a thing as healthy eating. But a lot of people's eating habits are dominated by craving and gluttony; or self-loathing and bingeing; or other cycles of self-reinforcing suffering. Healthy eating doesn't look like eliminating the action of eating, that is, starving yourself! (But it certainly doesn't look like pigging out and hating yourself, or getting envious over whether your gourmet meal is less cool than the other guy's, or eating whatever maximizes the profits of the food industry.) Attempting to starve yourself would be part of one of these cycles of suffering. Healthy eating entails eliminating those cycles. The same thing applies to other sorts of suffering."
Worried voice again:
"Okay, sure, eliminating specific intense knots of 'desire and suffering' makes sense to me. But what about the limit case? If the theory says 'everything in life is suffering', then after you eliminate those knots, the theory is still going to aim at eliminating everything else in life. That's extinctionism right there. Hey wait a minute, doesn't nirvana mean extinction to begin with?"
Reassuring voice:
"Hey, hold on, I like that 'knots of desire and suffering' idea. You're thinking of painful knots in a muscle, where it's tense and it's keeping itself tense, and causes you pain. But there's a big difference between relieving a knot in a muscle, and never putting any tension on that muscle at all. Healthy muscle motion isn't a knot, but it also isn't disuse and atrophy. Unknotting the knots is part of getting to healthy motion. It doesn't mean the end goal is to go totally limp and relaxed all the time. But if the reason you can't relax at all is because of painful knots, then worrying about disuse and atrophy is the wrong cognitive behavior."
W:
"Yeah, I was also thinking of Knots by Laing, and the idea of self-reinforcing interpersonal suffering. But seriously, what about the limit case?"
R:
"We are so far from the limit case that it doesn't make sense to worry about it! If we set out eliminating knots of suffering, the heat-death of the universe would come long before we actually got to the limit case where it makes sense to worry about extinctionism. Extinction is going to happen anyway eventually, but it's so very far in the future. And by reducing suffering, we would have had a happier future."
W:
"So, you agree that present-day extinctionists are just wrong? That eliminating human life isn't the correct way to eliminate human suffering?"
R:
"Yeah, definitely. They're bonkers bozos and always lose. Entropy happens but there's no point in worshiping it!"
W:
"Okay, fine, I'm a little bit more on board with this Buddhism stuff."
- Sign up for TechSoup to get discounts on many software programs.
This is the only link from this post where I saw a referral tracking code on the URL; the srsltid parameter is from Google Merchant Center.
And a lot of TechSoup's software partners look like proprietary/lock-in companies that a young and innovative organization might not want to cultivate a dependency on: Microsoft, Adobe, Norton, Autodesk.
If your organization has IT capacity, you might consider orienting it towards open-source and community-based software instead.
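Incidentally, if you do share links like the one above, tracking parameters such as srsltid can be stripped before passing them along. A minimal sketch (the example URL and helper name are made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_param(url, param="srsltid"):
    """Return the URL with the named query parameter removed."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_param("https://example.com/page?srsltid=abc123&ref=home"))
# https://example.com/page?ref=home
```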
Taboo "pseudoscientific". Here are some things we could ask instead:
My housemate found a dish soap that contains lipase, protease, and amylase: enzymes that break down fat, protein, and starch respectively. I have dubbed it "poker soap" because it wins with three "-ase"s.
"You're arguing that the definition of gambling is that there's a house moderating wagering on an outcome."
Not at all. I am, first, saying that "everything is gambling" is a mistake, a failure of reasoning; and, second, saying that there are important distinctions to make about different kinds of risk-taking, one of them being whether the risk is being offered by a negative-sum extractive system.
If everything is gambling, then nothing is gambling. The point of words is to make distinctions.
There is an important difference between taking a chance on something (like a new project) and making a bet in a system that has been set up for the express purpose of extracting money from bettors.
When you take risks in real life, there's no "house". There's nobody who's set up the entire arrangement to extract from you. When you engage in casino gambling, sports-book betting, or the state lottery, there is a "house"; there is someone more powerful than you who has constructed the system in which you're playing, and who expects to be able to reliably extract value from you and other bettors. You can tell they're in this position because otherwise the game would not be there to be played.
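The "reliably extract value" point can be made concrete with a textbook example (the numbers below are standard American-roulette odds, not anything from this thread): a $1 bet on a single number wins 35-to-1 with probability 1/38, so the bettor's expected value is negative and the house's is positive.

```python
from fractions import Fraction

# Expected value per $1 staked on a single number in American roulette:
# 38 pockets, a win pays 35-to-1 profit, otherwise the $1 stake is lost.
p_win = Fraction(1, 38)
ev = p_win * 35 - (1 - p_win) * 1

print(ev)  # -1/19, i.e. the bettor loses about 5.26 cents per dollar bet
```

Every bet on the layout has the same sign of expected value: negative for the player, positive for the house. That is what it means for the game to exist because someone can reliably extract from it.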
Oh, yeah, that's literal text. I didn't censor the AI's belief about its hair color or something; it doesn't have one.