shminux comments on Could a digital intelligence be bad at math? - Less Wrong Discussion

3 Post author: leplen 20 January 2016 02:38AM

Comment author: shminux 21 January 2016 03:29:41PM 1 point [-]

My implied point is that the line between hard math and easy math for humans is rather arbitrary, drawn mostly by evolution. AI is designed, not evolved, so the line between hard and easy for an AI depends on algorithmic complexity and processing power, not on millions of years of trying to catch prey or reach a fruit.

Comment author: Houshalter 29 January 2016 10:07:39AM 0 points [-]

I'm not sure I agree with that. Currently, most progress in AI comes from neural networks, which are very similar to human brains: not exactly the same, but with very similar strengths and weaknesses.

We may not be bad at things simply because we didn't evolve to do them. They might just be limits of our type of intelligence. NNs are good at big messy analog pattern matching, and bad at other things like doing lots of addition or solving chess boards.

Comment author: shminux 30 January 2016 06:23:02AM 0 points [-]

They might just be limits of our type of intelligence. NNs are good at big messy analog pattern matching, and bad at other things like doing lots of addition or solving chess boards.

That could be true; we don't know enough about the issue yet. But interfacing a regular computer with a NN should be a... how should I put it... no-brainer?
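The hybrid approach suggested here can be sketched in a few lines. This is a toy illustration, not anything from the thread: a simple pattern-matcher stands in for a trained network's fuzzy front end, and anything it recognizes as exact arithmetic is handed off to conventional computation rather than approximated by the network itself. The function name `route` and the regex-based "recognizer" are hypothetical stand-ins.

```python
import re

def route(query: str) -> str:
    """Toy dispatcher: a pattern-matcher (standing in for an NN's fuzzy
    front end) recognizes exact arithmetic and delegates it to the
    conventional computer instead of approximating it."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", query)
    if m:
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        results = {"+": a + b, "-": a - b, "*": a * b}
        # Exact answer from ordinary integer arithmetic, at any size.
        return str(results[op])
    # Fuzzy, perceptual queries would stay on the network side.
    return "defer to the network"

print(route("123456789 + 987654321"))       # exact: 1111111110
print(route("what does this picture show?"))
```

The point of the sketch is only that the interface is cheap: the hard part is the fuzzy recognition, while the exact computation the network is bad at is trivial for the host computer.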