Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Grit 27 July 2015 03:20:07PM *  5 points [-]

Published 4 hours ago as of Monday 27 July 2015 20.18 AEST:

Musk, Wozniak and Hawking urge ban on AI and autonomous weapons: Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

Link from Reddit front page

Link from The Guardian

Comment author: Thomas 27 July 2015 03:49:43PM 1 point [-]

Outlaw it, and only outlaws will have it.

Comment author: drethelin 26 July 2015 11:58:11PM 1 point [-]

You don't necessarily spend a bunch of money on sending a contact message to every planet with oxygen.

Comment author: Thomas 27 July 2015 05:23:09AM 0 points [-]

Why not? Or rather, make an expedition there ASAP.

At the very least. More likely, you already have a large colonization plan underway and this planet is on your path anyway.

Comment author: drethelin 26 July 2015 04:07:51AM 2 points [-]

The Fermi paradox is not a paradox. We have not sent out enough signal to be very noticeable, and we do not have the instruments to detect almost any alien signals. Due to signal attenuation, we would be pretty much unable to notice anything farther than around 10 light-years away, and even within that range only if the signal were beamed directly at us. The same problem applies to any aliens, and on their side the paradox is delayed by a further time problem: the speed of light. We've only been emitting radio for around 100 years, so there's a radius of only 100 light-years within which aliens could have detected us and decided to send a signal, and an even smaller 50-light-year radius within which we might have a chance of noticing a reply.
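The round numbers above can be sanity-checked with a few lines of arithmetic. This is a sketch of the comment's own simplifications (100 years of broadcasting, isotropic inverse-square attenuation), not precise astronomy:

```python
# Earth has emitted radio for roughly 100 years (the comment's figure).
years_broadcasting = 100

# Light travels one light-year per year, so our emissions fill a sphere
# of this radius:
detection_radius_ly = years_broadcasting        # 100 light-years

# A deliberate reply must make the round trip (out and back),
# halving the radius within which we could have heard one by now:
reply_radius_ly = years_broadcasting // 2       # 50 light-years

# Inverse-square attenuation: an isotropic signal received at 100 ly
# is about 100x weaker than the same signal received at 10 ly,
# which is why faint leakage becomes undetectable so quickly.
def relative_power(distance_ly, reference_ly=10):
    return (reference_ly / distance_ly) ** 2

print(detection_radius_ly, reply_radius_ly, relative_power(100))
```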

Comment author: Thomas 26 July 2015 10:20:58AM 1 point [-]

We've only been emitting radio for around 100 years

We've been advertising the presence of oxygen in our atmosphere for about 1 billion years. Any nontrivial aliens would notice this.

Comment author: Thomas 22 July 2015 07:36:17PM -1 points [-]

I think you have to invent this, not just learn it.

Comment author: jacob_cannell 21 July 2015 09:27:59PM 2 points [-]

Oh yes.

A month ago I touched on this topic in "The Brain as a Universal Learning Machine". I intend to soon write a post or two specifically focusing on near term predictions for the future of DL AI leading to AGI. My main counterintuitive point is that the brain is actually not that powerful at all at the circuit level.

Comment author: Thomas 21 July 2015 09:39:15PM 0 points [-]

My main counterintuitive point is that the brain is actually not that powerful at all at the circuit level.

Quite possible, even quite likely. I think nature is trying to tell us this, by just how bad we humans are at arithmetic, for example.

Comment author: turchin 21 July 2015 03:39:24PM 0 points [-]

One might joke that the idea of creating Friendly AI belongs to the same class of landmines (I hope not) :) Perpetuum mobile certainly does.

Comment author: Thomas 21 July 2015 04:35:51PM 4 points [-]

A human never knows whether it is a landmine or just a very difficult task. Neither does the AI.

I have some advice against those landmines, though. Do not spend all of your time on something which has not been solved for a long time. Also, decrease your devotion over time.

I suspect there were brilliant mathematicians in the past who devoted their entire lives to Goldbach's conjecture or something of that kind. That's why we have never heard of them; this "landmine" rendered them obscure. Had they chosen something easier (or even possible) to solve, they could have been famous.

Comment author: Thomas 21 July 2015 03:27:42PM 3 points [-]

The natural "philosophical landmines" already work on at least some people. They put some or even all of their resources into something they can't possibly achieve. Newton and "the problem of the Trinity", for example.

Comment author: Houshalter 20 July 2015 11:08:16AM 7 points [-]

You are not alone. I think NNs are definitely the best approach to AI, and recent progress is quite promising. They have had a lot of success on a number of different AI tasks, from machine vision to translation to video-game playing. They are extremely general-purpose.

Here's a recent quote from Schmidhuber (who I personally believe is most likely to create AGI.)

Schmidhuber and Hassabis found sequential decision making as a next important research topic. Schmidhuber’s example of Capuchin monkeys was both inspiring and fun (not only because he mistakenly pronounced it as a cappuccino monkey.) In order to pick a fruit at the top of a tree, Capuchin monkey plans a sequence of sub-goals (e.g., walk to the tree, climb the tree, grab the fruit, …) effortlessly. Schmidhuber believes that we will have machines with animal-level intelligence (like a Capuchin smartphone?) in 10 years.

Schmidhuber’s answer was the most unique one here. He believes that the code for truly working AI agents will be so simple and short that eventually high school students will play around with it. In other words, there won’t be any worry of industries monopolizing AI and its research. Nothing to worry at all!

Comment author: Thomas 20 July 2015 03:37:34PM 4 points [-]

Meanwhile I have also seen what Schmidhuber has to say, and it is very interesting. He is talking about the second NN renaissance, which is happening now.

I wouldn't be too surprised if a dirty general AI were achieved this way. Not that it's very likely yet, but it's possible. And it could be quite nasty as well. Perhaps it's not only the most promising avenue, but also the most dangerous one.

Comment author: Thomas 20 July 2015 10:43:04AM 12 points [-]
  • Just because a man has died for it, does not make it true.

Oscar Wilde

Comment author: Thomas 20 July 2015 10:08:52AM *  8 points [-]

I see, as many others may, that we are currently living in an NN (neural networks) renaissance. They are not as good as one might wish them to be; in fact, sometimes they seem quite funny.

Still, after some unexpected advances from last year onward, they look quite unstoppable to me. Further advances are plausible, and their application to playing the game of Go, for example, could bring us some very interesting achievements. Even a big surprise is possible here.

Does anybody else share my view?
