Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: V_V 30 January 2016 03:25:52PM 3 points [-]

You have to be more specific with the timeline. The field-effect transistor was patented in 1925 but received little interest due to many technical problems. It took three decades of research before the first commercial silicon transistors were produced by Texas Instruments in 1954.

Gordon Moore formulated his eponymous law in 1965, while he was director of R&D at Fairchild Semiconductor, a company whose entire business consisted of manufacturing transistors and integrated circuits. By that time, tens of thousands of transistor-based computers were in active commercial use.

Comment author: EHeller 01 February 2016 03:11:28AM 0 points [-]

It wouldn't have made a lot of sense to predict any doublings for transistors in an integrated circuit before 1960, because I think that is about when the integrated circuit was invented.

Comment author: Houshalter 29 January 2016 08:12:18PM 3 points [-]

I don't have a source on this, but I remember an anecdote from Kurzweil that scientists who worked on early transistors were extremely skeptical about the future of the technology. They were so focused on solving specific technical problems that they didn't see the big picture. An outsider, by contrast, could have just looked at the general trend and predicted a doubling every 18 months, and that prediction would have been accurate for at least 50 years.
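To put a rough number on that claim, here is a back-of-the-envelope sketch (my own illustrative arithmetic, not from the anecdote) of what a sustained 18-month doubling implies over 50 years:

```python
# Back-of-the-envelope check (illustrative arithmetic only):
# if transistor counts double every 18 months, 50 years of sustained
# doubling implies roughly a ten-billion-fold increase.

years = 50
doubling_months = 18

doublings = years * 12 / doubling_months   # about 33.3 doublings
growth_factor = 2 ** doublings             # about 1.1e10

print(f"{doublings:.1f} doublings -> ~{growth_factor:.2e}x growth")
```

That is the scale of trend an outside view would have had to extrapolate.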

So that's why I wouldn't trust various ML experts, like Ng, who have said not to worry about AGI. No, the specific algorithms they work on are not anywhere near human level. But the general trend, and the proof that humans aren't really that special, is concerning.

I'm not saying that you should just trust Yudkowsky or me instead. And expert opinion still has value. But maybe pick an expert who is more "big picture" focused? Perhaps Jürgen Schmidhuber, who has done a lot of notable work on deep learning and ML, but also has an interest in general intelligence and self-improving AIs.

And I don't have any specific prediction from him on when we will reach AGI. But he did say last year that he believes we will reach monkey-level intelligence in 10 years, which would be a huge milestone.

Another candidate might be the group being discussed in this thread, DeepMind. They are focused on reaching general AI instead of just typical machine vision work. That's why they have such a strong interest in game playing. I don't have any specific predictions from them either, but I do get the impression they are very optimistic.

Comment author: EHeller 30 January 2016 12:04:18AM 0 points [-]

This claim doesn't make much sense from the outset. Look at your specific example of transistors. In 1965, an electronics magazine wanted to figure out what would happen over time with electronics/transistors, so they called up an expert: the director of research of Fairchild Semiconductor. Gordon Moore (that director of research) proceeded to coin Moore's law and told them the doubling would continue for at least a decade, probably more. Moore wasn't an outsider; he was an expert.

You then generalize from an incorrect anecdote.

Comment author: Jiro 25 January 2016 12:03:26AM 6 points [-]

"Dancing bear" is a term. It doesn't literally indicate that he's comparing black people to animals.

Comment author: EHeller 25 January 2016 04:21:06AM 7 points [-]

I'm not sure the connotation of the term (i.e. that a black person being successful at anything is so shocking it's entertainment value all on its own) makes the statement any better. Especially when discussing, say, one of the most important American musicians of all time (among others).

Comment author: EHeller 16 January 2016 06:12:04AM *  5 points [-]

I thought the heuristic was "if I think I passed the hotel, I was going too fast to notice. I better slow down so I see it when I come up on it, or so I might recognize a landmark/road that indicates I went too far." We slow down not because we are splitting the difference between turning around and continuing on. We slow down to make it easier to gather more information, a perfectly rational response.

In response to comment by EHeller on LessWrong 2.0
Comment author: btrettel 10 January 2016 04:06:20AM 0 points [-]

Interesting point. Can you give an example of this knowledge?

I'm working on a PhD myself (in engineering), but the main things I feel I get from it are access to top scholars, mentoring, structure, and the chance to talk with others who are interested in learning and research. One could also have access to difficult-to-obtain equipment in academia, but a large corporation could provide such equipment as well. In principle I don't think these things are unique to academia.

In response to comment by btrettel on LessWrong 2.0
Comment author: EHeller 10 January 2016 06:49:49AM 2 points [-]

Sure, they're not 100% unique to academia; there are also industrial research environments.

My PhD was in physics, and there were lots of examples: weird tricks for aligning optics benches, semi-classical models that gave good order-of-magnitude estimates despite a lack of rigour, which estimates from the literature were trustworthy (and which were garbage). Biophysics and materials science labs had all sorts of rituals around sample and culture growth and preparation. Many were voodoo, but there were good reasons for a lot of them as well.

Even tricks for using equipment: such-and-such piece of equipment might need really good impedance matching at one connection, but you could get by being sloppy on other connections because of reasons A, B, and C, etc.

A friend of mine in math was stuck trying to prove a lemma for several months when famous professor Y suggested to him that famous professor Z had probably proven it but never bothered to publish.

In response to comment by IlyaShpitser on LessWrong 2.0
Comment author: Risto_Saarelma 09 January 2016 08:57:44AM *  1 point [-]

Yeah, I am sure enough about this not happening that I am willing to place bets. There is an enormous amount of intangibles Coursera can't give you (I agree it can be useful for a certain type of person for certain types of aims).

Agree that being inside academia is probably a lot bigger deal than people outside it really appreciate. We're about to see the first generation that grew up with a really ubiquitous internet come to grad school age though. Currently in addition to the assumption that generally clever people will want to go to university, we've treated it as obvious that the Nobel prize winning clever people will have an academic background. Which has been pretty much mandatory, since that used to be the only way you got to talk with other academicians and to access academic publications.

What I'm interested in now is whether in the next couple decades we're going to see a Grigori Perelman or Shinichi Mochizuki style extreme outlier produce some result that ends up widely acknowledged to be an equally big deal as what Perelman did, without ever having seen the inside of a university. You can read pretty much any textbook or article you want over an internet connection now, and it's probably not impossible to get professional mathematicians talking with you even when they have no idea who you are, if it's evident from the start that you have some idea what their research is about. And an extreme outlier might be clever enough to figure things out on their own, obsessive enough to keep working on them on their own for years, and somewhat eccentric, so that they take a dim view of academia and decline to play along out of principle.

It'd basically be a statistical fluke, but it would put a brand new spin on the narrative about academia. Academia wouldn't be the one obvious source of higher learning anymore; it'd be the place where you go when you're pretty smart but not quite good and original enough to go it alone.

Comment author: EHeller 09 January 2016 08:51:06PM 2 points [-]

In STEM fields, there is a great deal of necessary knowledge that simply is not in journals or articles, and is carried forward as institutional knowledge passed around among grad students and professors.

Maybe someday someone clever will figure out how to disseminate that knowledge, but it simply isn't there yet.

Comment author: Gunnar_Zarncke 27 November 2015 05:03:45PM 0 points [-]

By this line of reasoning almost all past theories can be discredited. People use a theory to make predictions and act on them. Only later do you learn the shortcomings. If you don't have empiricism you don't even have a tool to systematically notice your errors. I think this is a fully general counterargument.

Comment author: EHeller 27 November 2015 10:54:51PM 2 points [-]

No, the important older theories led to better theories.

Newton's gravitational physics made correct predictions of limited precision, and Newton's laws led to the development of Navier-Stokes, kinetic theories of gases, etc. Even phlogiston led to the discovery of oxygen and the modern understanding of oxidation. You don't have to be 100% right to make useful predictions.

Vitalism, on the other hand, like astrology, didn't lead anywhere useful.

Comment author: Gunnar_Zarncke 26 November 2015 08:29:49PM 0 points [-]

Quantum theories are responsible for a lot of the quack ideas too. I fear this isn't enough to make an idea ridiculous.

Comment author: EHeller 27 November 2015 01:01:23AM 3 points [-]

But quantum theory also makes correct predictions, and mainstream physics does not en masse advocate quackery. Vitalism never worked, and it led the entire medical community to advocate actively harmful quackery for much of the 19th century.

Comment author: ChristianKl 25 November 2015 11:42:35PM 3 points [-]

It's not ridiculous, because it's a concept that has been used to advance science. For many practical applications it makes sense to treat living entities differently from non-living ones, just as for many practical applications Newton's physics is useful. The fact that modern physics showed Newton's physics to be inaccurate doesn't make it ridiculous.

Comment author: EHeller 26 November 2015 02:36:08AM 1 point [-]

No, vitalism wasn't just a dead end, it was a wrong alley that too many people spent time wandering down. Vital theories were responsible for a lot of the quack ideas of medical history.

Comment author: Jiro 22 November 2015 03:43:48AM 0 points [-]

If being majority Christian means being tyrannical, the USA is currently a tyranny, and so is every other Western country.

The US is majority Christian, but not majority alieving-Christian.

Comment author: EHeller 22 November 2015 04:26:40AM 1 point [-]

I don't think that is true? There is a huge contingent of evangelicals (last I checked, a bit under half of Americans believe in creationism), and it only takes a few non-creationist but religious Christians to get to a majority.
