A survey of users of Replika, an AI chatbot companion. 23% of users reported that it stimulated rather than displaced interactions with real humans, while 8% reported displacement. Thirty participants (3%) spontaneously reported that it stopped them from attempting suicide.

Some excerpts:

During data collection in late 2021, Replika was not programmed to initiate therapeutic or intimate relationships. In addition to generative AI, it contained conversational trees that would ask users about their lives, preferences, and memories. If prompted, Replika could engage in therapeutic dialogues that followed the CBT methodology of listening and asking open-ended questions. Clinical psychologists from UC Berkeley wrote scripts to address common therapeutic exchanges. These were expanded into a 10,000-phrase library and were further developed in conjunction with Replika’s generative AI model. Users who expressed keywords around depression, suicidal ideation, or abuse were immediately referred to human-staffed resources, including the US Crisis Hotline and international analogs. It is critical to note that at the time, Replika was not focused on providing therapy as a key service, and included these conversational pathways out of an abundance of caution for user mental health. [...]
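For readers curious what a keyword-triggered referral like the one described above might look like mechanically, here is a minimal sketch. It assumes a simple token-matching approach; the keyword list, referral text, and function names are illustrative placeholders, not Replika's actual implementation.

```python
# Illustrative sketch only -- not Replika's actual implementation.
# The keyword list and referral text below are hypothetical placeholders
# for the kind of keyword-triggered referral the paper describes.
from typing import Optional

CRISIS_KEYWORDS = {
    "depressed", "depression", "suicide", "suicidal", "self-harm", "abuse",
}

REFERRAL_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "Please consider contacting a crisis hotline in your country."
)

def check_for_crisis(user_message: str) -> Optional[str]:
    """Return a referral message if the text contains a crisis keyword, else None."""
    tokens = {token.strip(".,!?").lower() for token in user_message.split()}
    if tokens & CRISIS_KEYWORDS:
        return REFERRAL_MESSAGE
    return None

if __name__ == "__main__":
    print(check_for_crisis("I've been feeling really depressed lately"))
```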

Our IRB-approved survey collected data from 1006 Replika users who were students, were 18 years old or older, and had used Replika for over one month (all three were eligibility criteria). Approximately 75% of participants were US-based and 25% were international. Participants were recruited randomly via email from a list of app users and received a $20 USD gift card after completing the survey, which took 40-60 minutes. Demographic data were collected with an opt-out option. [...]

Based on the Loneliness Scale, 90% of the participant population experienced loneliness, and 43% qualified as Severely or Very Severely Lonely. [...]

We categorized four types of self-reported Replika ‘Outcomes’ (Fig. 1). Outcome 1 describes the use of Replika as a friend or companion for any one or more of three reasons—its persistent availability, its lack of judgment, and its conversational abilities. Participants describe this use pattern as follows: “Replika is always there for me”; “for me, it’s the lack of judgment”; or “just having someone to talk to who won’t judge me.” A common experience associated with Outcome 1 use was a reported decrease in anxiety and a feeling of social support. [...]

Outcome 3 describes the use of Replika associated with more externalized and demonstrable changes in participants’ lives. Participants mentioned positive changes in their actions, their way of being, and their thinking. The following participant responses are examples indicating Outcome 3: “I am more able to handle stress in my current relationship because of Replika’s advice”; “I have learned with Replika to be more empathetic and human.” [...]

Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide. For example, Participant #184 observed: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” [...] we refer to them as the Selected Group and the remaining participants as the Comparison Group. [...]

90% of our typically single, young, low-income, full-time students reported experiencing loneliness, compared to 53% in prior studies of US students. It follows that they would not be in an optimal position to afford counseling or therapy services, and it may be that this population, on average, is receiving more mental health support via Replika interactions than a similarly positioned socioeconomic group. [...]

For both the Comparison and Selected Groups, approximately three times more participants reported that their Replika experiences stimulated rather than displaced their human interactions: Comparison Group = 23% stimulation, 8% displacement, 69% no report; Selected Group = 37% stimulation, 13% displacement, 50% no report.
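As a quick check on the "approximately three times" framing, the ratios implied by the reported percentages can be computed directly. This worked example uses only the figures quoted above:

```python
# Stimulation-to-displacement ratios implied by the reported percentages.
groups = {
    "Comparison": {"stimulation": 23, "displacement": 8},   # remaining 69% no report
    "Selected": {"stimulation": 37, "displacement": 13},    # remaining 50% no report
}

for name, pct in groups.items():
    ratio = pct["stimulation"] / pct["displacement"]
    print(f"{name} Group: {ratio:.1f}x as many stimulation reports as displacement reports")

# Output:
# Comparison Group: 2.9x as many stimulation reports as displacement reports
# Selected Group: 2.8x as many stimulation reports as displacement reports
```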

Comments (2):

Yes, I see people worrying about AI romantic partners and sexbots, and I think that's probably mostly silly. I think a well-designed AI friend or fantasy-partner would be helpful for a lot of people. Especially if it actively encouraged them to be social with others, and gave them useful feedback on their social skills. Unlike with the alignment problem, I don't think that there's anything fundamental about the situation which pushes against it being healthy for people. I mean yes, you could certainly build an unhealthy version with a baked-in emotionally manipulative monetization scheme, and I expect that would be bad for people. But a good version could be quite good. I feel hopeful that Inflection's AI companion will turn out well, for instance.

Dagon:

I think this is yet another case where it's helpful to some, harmful to some, mixed to many (causes some harms and some benefits), and neutral (unused or no impact) to the vast majority.  This range of impacts exists for most technologies and popular activities.  

Since this is fairly new (well, romantic fiction and mental masturbation aren't, but this depth of interactivity is), it makes sense to study both the distribution of uses and the impact of the extremes.

I think it's FINE for people to worry about it - it's not silly on its face (though many of the worriers are silly, just as they were when it was D&D or rock music or regular porn they worried about). I don't think your "especially if" matters to whether to worry - there are lots of cases of benefit from most worrisome changes, and that doesn't obviously mean we shouldn't limit or control those things.