Comment author: Houshalter 05 October 2016 08:43:00PM 2 points [-]

That's not really surprising. Google employs by far the most AI researchers and has general AI as an actual goal. DeepMind in particular has been pushing reinforcement learning and general game playing, which is the first step towards building AI agents that optimize utility functions in complex real-world environments, instead of just classifying images or text.

What specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and focuses more on language learning, memory, and reasoning, which are possibly the critical pieces for reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.

I don't think we are that far away from AGI.

Comment author: rhaps0dy 06 October 2016 09:50:25AM 0 points [-]

I don't think we are that far away from AGI.

At the very least 20 years. And yes, Alphabet is the closest, but in 20 years a lot of things can change.

Comment author: hairyfigment 15 June 2016 08:16:15PM 1 point [-]

Because LWers adopting this rule would not produce a swarm of false positives (and therefore I won't do it).

Comment author: rhaps0dy 21 June 2016 07:55:39AM 0 points [-]

This is what I thought. But ChristianKl is right: it doesn't need to. From the first false positive you're already doing damage, at almost no cost to yourself. Sure, your address will start to receive more spam, but it will be filtered like the spam you already get.

But having this built into the ISP, or deployed as a really popular extension, would deal a big blow to spam.

Comment author: Elo 14 May 2016 09:59:48PM 4 points [-]

No, actually: they want to weed out people who notice spelling. If you notice spelling, you probably also notice scams (this is a commonly known pattern of email scams). If only gullible, scammable people respond, all the better for them.

Comment author: rhaps0dy 15 June 2016 05:10:31PM *  2 points [-]

Today on Hacker News there's a research article discussing exactly this.

https://news.ycombinator.com/item?id=11909111

Makes me think that a possible method to mitigate spam would be to answer each email with an LSTM-generated blob of text, so that the attackers are swarmed with false positives and cannot continue the attack. Of course, this would have to be implemented by the email provider.
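The generation half of that idea can be sketched in a few lines. This is a toy, not a real responder: a character-level LSTM cell implemented by hand in NumPy, with random, untrained weights, sampling a blob of text to send back. All names and sizes here are illustrative assumptions; the output is gibberish, but for swamping a scammer with plausible-looking false positives that may be enough, and a serious version would train the weights on real email text.

```python
import numpy as np

rng = np.random.default_rng(0)
chars = list("abcdefghijklmnopqrstuvwxyz .,")  # toy vocabulary
V, H = len(chars), 32                          # vocab size, hidden size

# Random (untrained) weights; a real deployment would learn these.
Wx = rng.normal(0.0, 0.5, (4 * H, V))  # input -> gates
Wh = rng.normal(0.0, 0.5, (4 * H, H))  # hidden -> gates
b  = np.zeros(4 * H)
Wy = rng.normal(0.0, 0.5, (V, H))      # hidden -> output logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generate_blob(length=200):
    """Sample `length` characters from the (untrained) LSTM."""
    h, c = np.zeros(H), np.zeros(H)
    x = np.zeros(V)
    x[0] = 1.0  # arbitrary start symbol
    out = []
    for _ in range(length):
        z = Wx @ x + Wh @ h + b
        # Split the pre-activations into the four LSTM gates.
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2 * H])     # forget gate
        o = sigmoid(z[2 * H:3 * H]) # output gate
        g = np.tanh(z[3 * H:])      # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        # Softmax over the vocabulary, then sample the next character.
        p = np.exp(Wy @ h)
        p /= p.sum()
        idx = rng.choice(V, p=p)
        out.append(chars[idx])
        x = np.zeros(V)
        x[idx] = 1.0
    return "".join(out)

reply = generate_blob()
```

Each scam email would be answered with a fresh `generate_blob()` call, so the attacker has to spend human time sorting real marks from machine noise.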

Comment author: ArgleBlargle 14 May 2016 06:28:10AM 11 points [-]

Thanks for doing this.

Comment author: rhaps0dy 14 May 2016 12:31:20PM 3 points [-]

Gratitude thread.

What a load of work, Ingres. Thank you for doing this.

Comment author: Kuaiyu 02 March 2016 09:24:01PM 12 points [-]

If I wanted to run an experiment to test how susceptible to scams the LW community actually was, this is exactly how I would do it.

Comment author: rhaps0dy 09 April 2016 12:23:49PM 2 points [-]

I would probably use better spelling in the messages; poor spelling reduces the scammer's credibility.