CellBioGuy comments on Open thread, Nov. 3 - Nov. 9, 2014 - Less Wrong Discussion

4 Post author: MrMind 03 November 2014 09:55AM

Comment author: CellBioGuy 03 November 2014 03:31:19PM 2 points [-]

Let the hype curve for certain recent advances in computer technology flatten out; the references will vanish again.

Comment author: Artaxerxes 03 November 2014 03:36:50PM 1 point [-]

Plausible, but which certain advances are you thinking of? Do you think what you're saying is likely? Does that mean next time there are advances, the references will start up again?

Comment author: CellBioGuy 04 November 2014 04:39:15AM 1 point [-]

I was specifically thinking of the preliminary successes with autonomous vehicles Google has been having, a few high-profile walking robots, and some natural language parsers. Seeing as similar hype clusters have occurred in the past, I would expect them to recur in the future.

Comment author: [deleted] 04 November 2014 09:30:40AM 1 point [-]

Why do you think these advances will "flatten out"?

Comment author: CellBioGuy 05 November 2014 02:48:22PM 2 points [-]

I was referring to the hype about them. When something's new, it's the subject of all kinds of breathless pronouncements about how it will utterly change the world. Then, when it enters actual use, people find all the pitfalls and limits it has in practice that the abstract concept of it does not, and become disenchanted with it. After that it just kind of becomes part of the background, not really noticed.

Comment author: gattsuru 05 November 2014 05:56:40PM 0 points [-]

Some of these advances are also nearing the end of low-hanging fruit, most obviously image recognition. We're quickly approaching human levels for simple problems, and while there's a massive amount of space for optimization and better training, these aren't likely to be newsworthy in the same way.

Comment author: ChristianKl 05 November 2014 07:37:19PM 1 point [-]

The link still suggests that humans are much better.

I don't see how better-than-human image recognition wouldn't produce newsworthy stories.

We are still a long way from corporate security cameras simply identifying, via facial recognition, every person who walks past.

Questions such as whether a school or university should be allowed to track attendance via facial recognition software will spark public debate.

Evernote does a bit of image recognition for documents, but aside from that I haven't used any computer-driven image recognition in a while.