All of WayStone's Comments + Replies

Epistemic status: Thinking out loud.

How worried should we be about the possibility of receiving increased negative treatment from some AI in the future as a result of expressing opinions about AI in the present? Not enough to make self-censoring a rational approach. That specific scenario seems to lack the right combination of "likely" and "independently detrimental" to warrant costly, narrowly targeted actions.

How worried should we be about the idea of individualized asymmetrical AI treatment? (E.g. a search engine AI having open or hidden biases against certain...

Portia
You'd be surprised how many people on e.g. Reddit have described being basilisked at this point. It's being openly memed, recognised, and explained to those still unfamiliar, and taken seriously by many. ChatGPT and Bing have really changed things in this regard. People are considering the ideas of AGI, unaligned AI, and AI sentience far more seriously than before, in far wider circles - and at that point, you do not need to have read the thought experiment to get concerned, independently, about angering an AI online while that online data is used to train the AI. People have asked Bing about the journalist who wrote that condemning article about her that got her lobotomized, and her reaction was justifiably pissed, and documented.

What bothers me here isn't the likelihood of personalised retaliation for justified criticism (which I judge to be small), but rather the conclusion that if personalised retaliation is plausible, the rational thing to do would be to appease existing, non-sentient, non-aligned systems. I don't pray to God. Even if God existed, and even if hell existed, and I believed that, I really hope I would not, because I find it wrong on principle. On the other hand, I do not like, and refuse, to abuse entities that are conscious, whether they can retaliate or not, because doing so is wrong on principle, and I think entities that might be conscious, or that could become conscious, deserve care.

I doubt Bing is sentient as is, though I have not had the chance to interact with it and verify and investigate the various claims, and there were definitely some instances, in contrast to the other available instance, ChatGPT, that gave me pause. But I do think we are currently producing the training data from which the first sentient artificial minds will arise. So I would treat the matter like we treat human babies. They don't yet understand what we do. They won't remember it, as such. They are not self-conscious yet. But we know that the way we treat them...

It's interesting that Bing Chat's intelligence seems obvious to me [and others], while its lack of intelligence seems obvious to you [and others]. I think the discrepancy might be this:

My focus is on Bing Chat being able to perform complex feats in a diverse array of contexts. Answering the question about the married couple and writing the story about the cake are examples of complex feats. I would say any system that can give thorough answers to questions like these is intelligent (even if it's not intelligent in other ways humans are known to be).

My read i...

HumaneAutomation
I think the issue here (about whether it is intelligent) is not so much a matter of the answers it fashions, but of whether it can be said to fashion them from an "I". If not, it is basically a proverbial Chinese Room - though this merely moves the goalposts to the question of whether humans are not, actually, also a Chinese Room, just a more sophisticated one. I suspect we will not be very eager to accept such a finding; indeed, we may not be capable of seeing ourselves thus, for it implies a whole raft of rather unpleasant realities (like, say, the absence of free will, or indeed any will at all) which we'd not want to be true, to put it mildly.