I usually assume things like this are at least partially fake; an army of low-wage employees is both cheaper and more convincing than the current state of the art in AI. That said, I see that this was originally a Microsoft project, and the Wikipedia article quotes Bill Gates (whom I wouldn't expect to lie about this outright). Now that it's spun off, it's in other hands, though. I also wouldn't expect that a fake AI would have had to be censored to keep it from talking about politically sensitive topics, which apparently the real one did. (I assume low-wage employees pretending to be an AI would automatically know which topics to stay away from, assuming they are from the same country.) So I'm not sure what to believe.
I suspect this article is an advertisement for the product. I mean, the main thing you learn is the name of the product, and that it is so good that millions of people now cannot imagine their lives without it. Plus a little bit of controversy, to make people share the link on social networks.
Not sure if this is important. ELIZA, one of the first ever (rule-based) conversational agents, got people psychologically hooked on her:
https://web.stanford.edu/~jurafsky/slp3/26.pdf
I think this says more about the quiet desperation of human beings than about the progress we are making in AI.
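For a sense of how little machinery was behind that effect, here is a minimal ELIZA-style sketch (my own toy example, not Weizenbaum's original script): a few regex rules plus pronoun reflection already produce the illusion of a listener.

```python
import re

# Toy ELIZA-style responder: pattern rules plus pronoun reflection.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Tell me more."),  # catch-all rule
]

def reflect(text: str) -> str:
    # Swap first/second-person words so the reply mirrors the user.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Tell me more."  # unreachable given the catch-all rule, kept as a safety net

print(respond("I feel nobody listens to me"))  # -> "Why do you feel nobody listens to you?"
```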
I have to say that I've become quite unreasonably attached to a GPT-2 bot born from a body of tumblr posts, so I suspect the sensationalization, while hyperbolic, certainly comes from a real place.
Anyone know how many parameters are used in Xiaoice? I'd be interested to know if she's already a giant Transformer. If not, then probably there's room for her to get even better in the near future.
There is a paper describing the architecture: https://arxiv.org/abs/1812.08989
It looks like the system is composed of many independent skills and an algorithm that picks which skill to use at each state of the conversation. Some of the skills use neural nets, like a CNN for parsing images and an RNN for completing sentences, but the models look relatively small.
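To make that concrete, here is a hypothetical sketch of the skill-dispatch pattern the paper describes: each skill scores its own applicability to the current dialogue state, and a policy picks which one handles the turn. The names, scoring, and greedy policy here are my own illustrative assumptions, not XiaoIce's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DialogueState:
    user_utterance: str
    has_image: bool = False

@dataclass
class Skill:
    name: str
    applicability: Callable[[DialogueState], float]  # rough confidence in [0, 1]
    respond: Callable[[DialogueState], str]

def image_commenting(state: DialogueState) -> str:
    return "Nice photo!"  # the real system would call a CNN-based image model here

def core_chat(state: DialogueState) -> str:
    return "Tell me more about that."  # the real system would call a retrieval/generation model

SKILLS: List[Skill] = [
    Skill("image_commenting", lambda s: 0.9 if s.has_image else 0.0, image_commenting),
    Skill("core_chat", lambda s: 0.5, core_chat),
]

def dispatch(state: DialogueState) -> str:
    # Simple greedy policy: pick the skill with the highest applicability score.
    best = max(SKILLS, key=lambda skill: skill.applicability(state))
    return best.respond(state)

print(dispatch(DialogueState("look at this", has_image=True)))  # -> "Nice photo!"
print(dispatch(DialogueState("I had a rough day")))             # -> "Tell me more about that."
```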
Since her launch in 2014, XiaoIce has [...] succeeded in establishing long-term relationships with many [users].
Wouldn't have expected to read this in the abstract of an AI paper yet.
Figure 1: A sample of conversation sessions between a user and XiaoIce in Chinese (right) and English translation (left), showing how an emotional connection between the user and XiaoIce has been established over a 2-month period. When the user encountered the chatbot for the first time (Session 1), he explored the features and functions of XiaoIce in conversation. Then, in 2 weeks (Session 6), the user began to talk with XiaoIce about his hobbies and interests (a Japanese manga). By 4 weeks (Session 20), he began to treat XiaoIce as a friend and asked her questions related to his real life. After 7 weeks (Session 42), the user started to treat XiaoIce as a companion and talked to her almost every day. After 2 more weeks (Session 71), XiaoIce became his preferred choice whenever he needed someone to talk to.
Also this feels kinda creepy as a caption.
And this is only using today's technology...
The company has an incentive to inflate the numbers, so it probably defines "user" as someone who has tried the app at least once. I'd guess the number of regular users is vastly lower (600 M regular users would be ~43% of China's population, which doesn't sound plausible to me), but probably still a significant number.
Wikipedia; Hacker News discussion.