If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Image recognition, courtesy of the deep learning revolution & Moore's Law for GPUs, seems to be nearing human parity. The latest paper is "Deep Image: Scaling up Image Recognition", Wu et al 2015 (Baidu):
For another comparison, Table 3 on pg9 shows past performance. In 2012, the best performer reached 16.42% error; 2013 knocked it down to 11.74%, and 2014 to 6.66% (or 5.98%, depending on how much of a stickler you want to be), leaving a gap of ~0.8% to human-level performance.
EDIT: Google may have already beaten that 5.98% with a 5.5% result (thus halving the remaining gap to ~0.4%), according to a commenter on HN, "smhx":
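The gap arithmetic above can be checked in a couple of lines. This sketch assumes the commonly cited ~5.1% estimated human top-5 error rate on ImageNet as the parity target (that baseline figure is my assumption, not stated in the post):

```python
# Back-of-the-envelope check of the error-rate gaps discussed above.
# Assumption: human top-5 error on ImageNet is ~5.1% (a standard estimate).
HUMAN_ERROR = 5.1  # % top-5 error, estimated human baseline

def gap_to_human(model_error):
    """Remaining percentage-point gap between a model's top-5 error and the human baseline."""
    return round(model_error - HUMAN_ERROR, 2)

print(gap_to_human(5.98))  # Baidu's Deep Image result: ~0.88 points left (the "~0.8%" above)
print(gap_to_human(5.5))   # Google's reported result: ~0.4 points left, roughly halving the gap
```

Under that baseline, 5.98% leaves ~0.88 points and 5.5% leaves ~0.4, which matches the "halved the remaining difference" framing.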
To update: the latest version of the Baidu paper now claims to have gone from the 5.98% above to 4.58%.
EDIT: on 2 June, a notification (Reddit discussion) was posted: apparently the Baidu team made far more than the usual number of submissions to test how their neural network was performing on the held-out ImageNet sample. This is problematic because it means that some amount of their performance gain is probably due to overfitting (tweak a setting, submit, see if performance improves, repeat). The Google team is not accused of doing this, so probably the ...