That is not an argument against “the robots taking over,” nor a claim that AI poses no existential threat. It is a statement that we should ignore that threat, on principle, until the dangers ‘reveal themselves,’ with the implicit assumption that this requires the threats to actually start happening.
A lot of people will end up falling in love with chatbot personas, with the result that they will lose interest in dating real people, happy just to talk to their chatbot.
Good. If a substantial number of men do this, the dating market will presumably become more tolerable.
Especially in some countries where there are absurd demographic imbalances. Some stats from Poland, compared with Ireland (the pics were translated with an online tool, so they look a bit off):
Okay, I went off-topic a little bit, but these stats are so curiously bad I couldn't help myself.
Maybe GPT-3 could be used to find LW content related to the new post, using something like this: https://gpt-index.readthedocs.io
Unfortunately, I haven't gotten around to doing anything with it yet. But it seems useful: https://twitter.com/s_jobs6/status/1619063620104761344
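To make it concrete, here's a minimal sketch of the kind of thing I mean, assuming the gpt_index API as documented around now (`SimpleDirectoryReader` and `GPTSimpleVectorIndex`); the `lw_posts/` folder of saved posts and the `new_post.txt` file are hypothetical, and you'd need an OpenAI API key set in the environment:

```python
# Minimal sketch: index a folder of LW posts and surface content
# related to a new draft. Assumes the gpt_index package is installed,
# OPENAI_API_KEY is set, and "lw_posts/" holds posts as .txt files
# (both paths are hypothetical).
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Load every saved post in the folder as a document.
documents = SimpleDirectoryReader("lw_posts").load_data()

# Build a vector index over the posts; this embeds them via the
# OpenAI API, so it costs a small amount per document.
index = GPTSimpleVectorIndex(documents)

# Persist the index so it isn't rebuilt (and re-billed) on every run.
index.save_to_disk("lw_index.json")

# Query with the new post's text to retrieve related LW content.
new_post = open("new_post.txt").read()
print(index.query(
    "Which existing posts discuss ideas related to the following post? "
    + new_post
))
```

In practice you'd probably query with the post's title or a short summary rather than the full text, to stay under the model's token limit.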
For a more realistic example, consider the DNA sequence for smallpox: I'd definitely rather that nobody knew it than that people could find it out with some effort, even though I don't expect being exposed to that quoted DNA information to harm my own thought processes.
...is it weird that I saved it in a text file? I'm not entirely sure I've got the correct sequence, though. It's just that when I learned it was available, I couldn't not do it.
@gwern wrote an explanation of why this is surprising (for some) [here](https://forum.effectivealtruism.org/posts/Mo7qnNZA7j4xgyJXq/sam-altman-open-ai-discussion-thread?commentId=CAfNAjLo6Fy3eDwH3).
It is still a mystery to me what exactly Sam's motive is.