Comment author:Houshalter
05 October 2016 08:43:00PM
2 points
[-]
That's not really surprising. Google employs by far the most AI researchers and has general AI as an actual goal. DeepMind in particular has been pushing reinforcement learning and general game playing, which is the first step towards building AI agents that optimize utility functions in complex real-world environments, instead of just classifying images or text.
Which specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and focuses more on language learning, memory, and reasoning, which are possibly the critical pieces for reaching general intelligence. Microsoft just made headlines for forming a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.
I don't think we are that far away from AGI.
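(To make the contrast above concrete: in the reinforcement-learning setting, an agent learns behavior that maximizes cumulative reward through interaction, rather than mapping fixed inputs to labels as a classifier does. A minimal sketch with tabular Q-learning — the toy 5-state chain environment and all hyperparameters here are illustrative assumptions, not anything DeepMind actually uses:)

```python
import random

N_STATES = 5        # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]  # move left or right along the chain


def step(state, action):
    """Deterministic chain environment: reward only on reaching the final state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done


def greedy(q, state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])


def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = greedy(q, state)
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward reward + discounted future value
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


q = train()
policy = {s: greedy(q, s) for s in range(N_STATES - 1)}
print(policy)  # after training, every state should prefer +1 (move right toward the reward)
```

(The agent is never told which action is "correct" for any state; the rightward policy emerges purely from maximizing long-run reward, which is the sense in which this is a step beyond supervised classification.)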
Comment author:siIver
06 October 2016 12:06:05AM
2 points
[-]
Is there a relevant difference in how much the eventual winner will incorporate AI safety measures? Or do you think it is merely an issue of actually solving the [friendly AI] problem, and once it is solved, it will surely be used?