Comment author: turchin 09 March 2016 07:33:33PM *  5 points [-]

EY could have won such a prize if he had invested more time in studying neural networks rather than in writing science fiction. LessWrong is also full of clever minds who could probably be employed in a small AI project.

Comment author: V_V 09 March 2016 10:22:52PM 7 points [-]

EY could have won such a prize if he had invested more time in studying neural networks rather than in writing science fiction.

Has he ever demonstrated any ability to produce anything technically valuable?

Comment author: Vaniver 09 March 2016 02:35:27PM 10 points [-]

Several things I thought were interesting:

  1. The commentator (on the Deepmind channel) calling out several of AlphaGo's moves as conservative. Essentially, it would play an additional stone to settle or augment some group that he wouldn't necessarily have played around. What I'm curious about is how much this reflects an attempt by AlphaGo to conserve computational resources. "I think move A is a 12 point swing, and move B is a 10 point swing, but move B narrows the search tree for future moves in a way that I think will net me at least 2 more points." (It wouldn't be verbalized like that, since it's not thinking verbally, but you can get this effect naturally from the tree search and position evaluator.)

  2. Both players took a long time to play "obvious" moves. (Typically, by this I mean something like a response to a forced move.) 이 sometimes didn't--there were a handful of moves he played immediately after AlphaGo's move--but I was still surprised by the amount of thought that went into some of the moves. This may be typical for tournament play--I haven't watched any live before this.

  3. AlphaGo's willingness to play aggressively and get involved in big fights with 이, and then not lose. I'm not sure that all the fights developed to AlphaGo's advantage, but evidently enough of them did by enough.

  4. I somewhat regret 이 not playing the game out to the end; it would have been nice to know the actual score. (I'm sure estimates will be available soon, if not already.)
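The compute-budget intuition in point 1 can be sketched with a back-of-the-envelope calculation (the node budget and branching factors here are invented, not AlphaGo's actual numbers): a move that narrows the game tree lets the same search budget reach deeper.

```python
import math

def reachable_depth(node_budget, branching_factor):
    """Depth d a full search can reach when it must expand
    branching_factor**d nodes out of a fixed node budget."""
    return int(math.log(node_budget, branching_factor))

BUDGET = 10**6  # hypothetical per-move node budget

wide = reachable_depth(BUDGET, 30)   # move A: position stays wide open
narrow = reachable_depth(BUDGET, 8)  # move B: group settled, fewer replies

print(wide, narrow)  # the narrower position gets searched deeper
```

So even if move B is worth slightly less on its face, the deeper, more reliable evaluation of its consequences can make it score higher — which would look exactly like "conservative" play from the outside.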

Comment author: V_V 09 March 2016 04:29:21PM 7 points [-]

What I'm curious about is how much this reflects an attempt by AlphaGo to conserve computational resources.

If I understand correctly, at least according to the Nature paper, it doesn't explicitly optimize for this. Game-playing software is often perceived as playing "conservatively"; this is a general property of minimax search, and in the limit the Nash equilibrium consists of maximally conservative strategies.
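A toy illustration of why minimax looks conservative (the payoffs are made up): each candidate move is valued by its worst-case outcome over the opponent's replies, so a modest guaranteed gain beats a line with a bigger upside that the opponent can punish.

```python
def minimax_choice(moves):
    """Pick the move whose worst-case payoff (over the opponent's
    replies) is largest -- the minimax rule."""
    return max(moves, key=lambda m: min(moves[m]))

# hypothetical payoffs for us after each possible opponent reply
moves = {
    "aggressive": [12, -5],  # big upside, but the opponent can punish it
    "settling":   [10, 9],   # smaller upside, nothing bad can happen
}
print(minimax_choice(moves))  # "settling" -- the conservative move wins
```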

but I was still surprised by the amount of thought that went into some of the moves.

Maybe these obvious moves weren't so obvious at that level.

Comment author: Douglas_Knight 08 March 2016 06:54:54PM *  1 point [-]

It is a lot easier to document that the Greeks had cutting-edge engineering than to prove that it was based on theoretical knowledge.

Greek aqueducts and post-Greek Roman aqueducts were much better than pre-Greek Roman aqueducts. The process of building them may not have been better, but the choice of what to build was more sophisticated. Before the Greeks they just had water run downhill, requiring tunnels and bridges; afterwards they also ran water uphill. So the Romans definitely learned something from the Greeks. Some people think that they must have understood something about water pressure to do this, which would count as science. But there is no record of how they did it, neither theory, nor rules of thumb developed by trial and error. It is a great mystery that the surviving books by Roman aqueduct engineers don't seem adequate for running the aqueducts, let alone building them.

(By "the Greeks" I mean the Hellenistic period of 300-150BC.)

A better documented connection between theory and application is that Archimedes wrote a book on the theory of simple machines and invented the screw pump. However, that history is also controversial.

Comment author: V_V 08 March 2016 09:32:51PM 1 point [-]

Thanks for the information.

Comment author: ChristianKl 08 March 2016 05:34:44PM *  1 point [-]

Would you label Google's AlphaGo project "science" or "engineering"?

Comment author: V_V 08 March 2016 05:35:52PM 0 points [-]

Would you label the LHC "science" or "engineering"?

Comment author: V_V 08 March 2016 05:32:43PM 1 point [-]

Was Roman engineering really based on Greek science? And by the way, what is Greek science? If I understand correctly, the most remarkable scientific contributions of the Greeks were formal geometry and astronomy, but empirical geometry, which was good enough for the practical engineering applications of the time, had already been well developed since at least the Egyptians, and astronomy didn't really have practical applications.

Comment author: James_Miller 08 March 2016 12:59:00AM *  2 points [-]

Eventual diminishing returns, perhaps, but probably long after it was smart enough to do what it wanted with Earth.

A drug that raised the IQ of human programmers would make the programmers better programmers. Also, intelligence is the ability to solve complex problems in complex environments, so it does (tautologically) follow.

Comment author: V_V 08 March 2016 09:11:59AM *  3 points [-]

Eventual diminishing returns, perhaps, but probably long after it was smart enough to do what it wanted with Earth.

Why?

A drug that raised the IQ of human programmers would make the programmers better programmers.

The proper analogy is with a drug that raised the IQ of the researchers who invent the drugs that increase IQ. Does this lead to an intelligence explosion? Probably not. If the number of IQ points that you need to discover the next drug in constant time increases faster than the number of IQ points that the next drug gives you, then you will run into diminishing returns.
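The argument can be made concrete with a toy recurrence (all the numbers are invented): if each drug generation's gain shrinks as 1/IQ because the next discovery gets harder faster than the boost helps, intelligence grows only polynomially; if the gain is instead proportional to IQ itself, it explodes exponentially. Which regime holds is exactly the empirical question in dispute.

```python
def trajectory(step, generations=50, iq0=100.0):
    """Iterate iq -> iq + step(iq) and return the final value."""
    iq = iq0
    for _ in range(generations):
        iq += step(iq)
    return iq

diminishing = trajectory(lambda iq: 1000.0 / iq)  # gains shrink as problems harden
explosive = trajectory(lambda iq: 0.1 * iq)       # gains compound with intelligence

print(round(diminishing), round(explosive))
```

After 50 generations the first regime has roughly tripled the starting value (square-root-like growth), while the second has multiplied it a hundredfold and keeps accelerating.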

It doesn't seem to be much different with computers.

Algorithmic efficiency is bounded: for any given computational problem, once you have the best algorithm for it, by whatever performance measure you care about, you can't improve on it anymore. And in fact, long before you reach the perfect algorithm you'll already have run into diminishing returns in terms of effort vs. improvement: past some point you are tweaking low-level details in order to get small performance improvements.

Once you have maxed out algorithmic efficiency, you can only improve by increasing hardware resources, but this 1) requires significant interaction with the physical world, and 2) runs into asymptotic complexity issues: for most AI problems worst-case complexity is at least exponential, and average-case complexity is more difficult to estimate but most likely super-linear. Take a look at the AlphaGo paper, for instance: figure 4c shows how Elo rating increases with the number of CPUs/GPUs/machines. The trend is logarithmic at best, logistic at worst.
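Under a logarithmic fit of that curve (the points-per-doubling figure below is invented for illustration, not taken from the paper), every doubling of hardware buys the same flat rating gain, so each successive gain costs exponentially more machines:

```python
import math

def elo_gain(machines, baseline=1, pts_per_doubling=60):
    """Rating gain over the baseline under an assumed logarithmic
    scaling law: a constant number of points per hardware doubling."""
    return pts_per_doubling * math.log2(machines / baseline)

# going from 1 to 2 machines buys exactly as many points as 1024 -> 2048
print(elo_gain(2), elo_gain(2048) - elo_gain(1024))
```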

Now of course you could insist that it can't be disproved that significant diminishing returns will kick in before AGI reaches strongly super-human level, but, as I said, this is an unfalsifiable argument from ignorance.

Comment author: James_Miller 07 March 2016 07:27:02PM 1 point [-]

For almost any goal an AI had, the AI would make more progress towards this goal if it became smarter. As an AI became smarter it would become better at making itself smarter. This process continues. Imagine if it were possible to quickly make a copy of yourself that had a slightly different brain. You could then test the new self and see if it was an improvement. If it was, you could make this new self the permanent you. You could do this to quickly become much, much smarter. An AI could do this.

Comment author: V_V 08 March 2016 12:48:54AM *  1 point [-]

For almost any goal an AI had, the AI would make more progress towards this goal if it became smarter.

True, but it is likely that there are diminishing returns in how much adding more intelligence can help with other goals, including the instrumental goal of becoming smarter.

As an AI became smarter it would become better at making itself smarter.

Nope, doesn't follow.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: Gunnar_Zarncke 05 March 2016 12:27:55PM 4 points [-]

But what if a general AI could generate specialized narrow AIs? That is something the human brain cannot do but an AGI could. Thus, speed of general AI = speed of narrow AI + time to specialize.

Comment author: V_V 07 March 2016 03:50:39PM 0 points [-]

But what if a general AI could generate specialized narrow AIs?

How is that different from a general AI solving the problems by itself?

Comment author: James_Miller 07 March 2016 05:23:12AM 2 points [-]

"you can't prove that AI FOOM is impossible".

I don't agree.

Comment author: V_V 07 March 2016 12:33:40PM 1 point [-]

That's a 741-page book; can you summarize a specific argument?

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: TheAncientGeek 05 March 2016 08:17:47AM *  2 points [-]

I'm asking for references because I don't have them. It's a shame that the people who are able, ability-wise, to explain the flaws in the MIRI/FHI approach, actual AI researchers, aren't able, time-wise, to do so. It leads to MIRI's views dominating in a way that they should not. It's anomalous that a bunch of amateurs should become the de facto experts in a field just because they have funding, publicity, and spare time.

Comment author: V_V 06 March 2016 03:18:34PM *  1 point [-]

I'm asking for references because I don't have them. it's a shame that the people who are able, ability-wise, to explain the flaws in the MIRI/FHI approach

MIRI/FHI arguments essentially boil down to "you can't prove that AI FOOM is impossible".

Arguments of this form, e.g. "You can't prove that [snake oil/cryonics/cold fusion] doesn't work", "You can't prove there is no God", etc., can't be conclusively refuted.

Various AI experts have expressed skepticism about an imminent super-human AI FOOM, pointing out that the capabilities required for such a scenario, if it is even possible, are far beyond what they see in their daily cutting-edge research on AI, and that there are still lots of problems that need to be solved before even approaching human-level AGI. I doubt that these experts would have much to gain from continuing to argue over all the countless variations of the same argument that MIRI/FHI can generate.
