Comment author: [deleted] 03 September 2013 12:29:35AM -1 points

Thanks for the clarification!

I thought this was an engaging, well-written summary targeted to the general audience, and I'd like to encourage more articles along these lines. So as a follow-up question: How much income for MIRI would it take, per article, for the beneficial effects of sharing non-dangerous research to outweigh the negatives?

(Gah, the editor in me WINCES at that sentence. Is it clear enough or should I re-write? I'm asking how much I-slash-we should kick in per article to make the whole thing generally worth your while.)

Comment author: Simulation_Brain 03 September 2013 01:17:13AM 0 points

Given how many underpaid science writers are out there, I'd have to say that ~$50k/year would probably do it for a pretty good one, especially given the 'good cause' bonus to happiness that any qualified individual would understand and value. But is even $1k/week in donations realistic? What are the page-view numbers? I'd pay $5 for a good article on a valuable topic; how many others would as well? I suspect the numbers don't add up, but I don't even have an order-of-magnitude estimate on current or potential readers, so I can't say myself.

Comment author: simplicio 21 August 2010 05:01:00AM *  9 points

The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that uFAI is a serious existential risk. Any particular personal characteristics of EY would seem irrelevant till we have an opinion on that set of claims.

If EY were working on preventing asteroid impacts with Earth, and he were the main driving force behind that effort, he could say "I'm trying to save the world" and nobody would look at him askance. That's because asteroid impacts have definitely caused mass extinctions before, so nobody can challenge the very root of his claim.

The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.

My bottom line: what we should be discussing is simply "Are W, X, Y and Z true?" Once we have a good idea about how strong that house of cards is, it will be obvious whether Eliezer is in a "permissible" epistemic state, or whatever.

Maybe people who know about these questions should consider a series of posts detailing all the separate issues leading to FAI. As far as I can tell from my not-extremely-tech-savvy vantage point, the weakest pillar in that house is the question of whether strong AI is feasible (note I said "feasible," not "possible").

Comment author: Simulation_Brain 23 August 2010 04:55:28AM *  2 points

Upvoted; the issue of FAI itself is more interesting than whether Eliezer is making an ass of himself and thereby hurting the SIAI message (probably a bit; claiming you're smart isn't really smart, but then he's also doing a pretty good job as publicist).

One form of productive self-doubt is to have the LW community critically examine Eliezer's central claims. Two of my attempted simplifications of those claims are posted here and here on related threads.

Those posts don't really address whether strong AI is feasible; I think most AI researchers agree that it will become so, but disagree on the timeline. I believe it's crucial, but rarely recognized, that the timeline really depends on how many resources are devoted to it. Those appear to be steadily increasing, so it might not be that long.

Comment author: JoshuaZ 18 August 2010 03:28:59PM 0 points

Thanks, it is always good to actually have input from people who work in a given field. So please correct me if I'm wrong, but I'm under the impression that

1) Neural networks cannot in general detect connected components unless the network has some form of recursion.
2) No one knows how to make a neural network with recursion learn in any effective, marginally predictable fashion.

This is the sort of thing I was thinking of. Am I wrong about 1 or 2?

Comment author: Simulation_Brain 20 August 2010 08:58:47PM 1 point

Not sure what you mean by 1), but certainly, recurrent neural nets are more powerful. 2) is no longer true; see for example the GeneRec algorithm. It does something much like backpropagation, but since no derivatives are explicitly calculated, there's no concern with recurrent loops.
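To convey the flavor of the idea, here's a deliberately simplified, hypothetical sketch of a GeneRec-style update (a single output unit, no recurrent settling dynamics, so not O'Reilly's full published algorithm): the weight change is driven purely by the difference between a free-running "minus" phase and a target-clamped "plus" phase, so no derivatives are ever computed.

```python
import numpy as np

def generec_update(W, x, y_target, lr=0.5):
    """One simplified two-phase update (illustrative, not the full algorithm).

    Minus phase: output driven by the input alone.
    Plus phase: output clamped to the target.
    The weight change uses only the activation difference between the two
    phases, so no derivatives are ever explicitly calculated.
    """
    y_minus = 1.0 / (1.0 + np.exp(-W @ x))  # free-running (minus-phase) output
    y_plus = y_target                       # clamped (plus-phase) output
    W = W + lr * np.outer(y_plus - y_minus, x)
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(1, 2))
x = np.array([1.0, 0.0])
target = np.array([1.0])
for _ in range(200):
    W = generec_update(W, x, target)
y = 1.0 / (1.0 + np.exp(-W @ x))
print(round(float(y[0]), 2))  # trained output ends up close to the target of 1.0
```

The real algorithm lets activation settle through recurrent connections in each phase before comparing them, which is exactly why it sidesteps the recurrent-loop problem that plain backpropagation has.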

On the whole, neural net research has slowed dramatically, partly because of the common view you've expressed; but progress continues apace, and neural nets are not far behind cutting-edge vision and speech processing algorithms, while working much more like the brain does.

Comment author: Simulation_Brain 20 August 2010 06:02:43AM 5 points

I think this is an excellent question. I'm hoping it leads to more actual discussion of the possible timeline of GAI.

Here's my answer, important points first, and not quite as briefly as I'd hoped.

1) Even if uFAI isn't the biggest existential risk, the very low investment and interest in it might make it the best marginal value for an investment of time or money. As someone noted, having at least a few people thinking about the risk far in advance seems like a great strategy if the risk is unknown.

2) No one but SIAI is taking donations to mitigate the risk (as far as I know), so your point 2 is all but immaterial right now.

3) I personally estimate the risk of uFAI to be vastly higher than any other, although I am, as you point out, quite biased in that direction. I don't think other existential threats come close (although I don't have the expertise to evaluate "gray goo" self-replicator dangers).
a) AI is a new risk (plagues and nuclear wars have failed to get us so far);
b) it can be deadly in new ways (outsmarting/out-teching us);
c) we don't know for certain that it won't happen soon.

How hard is AI? We actually don't know. I study not just the brain but how it gets computation and thinking done (a rare and fortunate job; most neuroscientists study neurons, not the whole mind), and I think that its principles aren't actually all that complex. To put it this way: algorithms are rapidly approaching the human level in speech and vision, and the principles of higher-level thinking appear to be similar. (As an aside, EY's now-outdated Levels of Organization in General Intelligence does a remarkably good job of converging with my independently developed opinion on principles of brain function.) In my limited (and biased) experience, those with similar jobs tend to have similar opinions. But the bottom line is that we don't know either how hard, or how easy, it could turn out to be. Failure to this point is not strong evidence of continued failure.

And people will certainly try. The financial and power incentives are such that people will continue their efforts on narrow AI, and proceed to general AI when it helps solve problems. Recent military and intelligence grants indicate increasing interest in getting beyond narrow AI to more useful AI: things that can make intelligence and military decisions and actions more cheaply (and eventually more reliably) than a human. Industry similarly has a strong interest in narrow AI (e.g., sensory processing), but will probably be a bit later to the GAI party given its track record of short-term thinking. Academics certainly are doing GAI research, in addition to lots of narrow-AI work. Have a look at the BICA (biologically inspired cognitive architectures) conference for some academic enthusiasts with baby GAI projects.

So, it could happen soon. If it gets much smarter than us, it will do whatever it wants; and if we didn't build its motivational system veeery carefully, doing what it wants will eventually involve using all the stuff we need to live.

Therefore, I'd say the threat is on the order of 10-50%, depending on how fast it develops, how easy making GAI friendly turns out to be, and how much attention the issue gets. That seems huge relative to other truly existential threats.

If it matters, I believed very similar things before stumbling on LW and EY's writings.

I hope this thread is attracting some of the GAI sceptics; I'd like to stress-test this thinking.

Comment author: JoshuaZ 15 August 2010 10:31:10PM *  4 points

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which statistics are relevant. Other linear classifiers run into similar problems.
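The classic result in this family is easy to demonstrate: a single-layer perceptron (a linear classifier) cannot represent XOR, Minsky and Papert's famous example. A brute-force sketch over a grid of weights makes the point concrete; since XOR is provably not linearly separable, the search is guaranteed to come up empty.

```python
import itertools

# XOR truth table: ((x, y), label)
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separates(w1, w2, b):
    """True if the half-plane w1*x + w2*y + b > 0 matches all four labels."""
    return all((w1 * x + w2 * y + b > 0) == bool(label) for (x, y), label in points)

# Try every weight/bias combination on a grid from -10 to 10 in steps of 0.5.
grid = [i / 2 for i in range(-20, 21)]
found = any(separates(w1, w2, b) for w1, w2, b in itertools.product(grid, repeat=3))
print(found)  # prints False: no linear separator exists for XOR
```

Adding a hidden layer fixes this particular limitation, which is part of why the interesting theorems concern what each *class* of network can and cannot recognize.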

Comment author: Simulation_Brain 18 August 2010 06:20:49AM 3 points

I work in this field, and was under approximately the opposite impression; that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Comment author: utilitymonster 13 August 2010 01:30:55PM 5 points

a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

I've heard a lot of variations on this theme. They all seem to assume that the AI will be a maximizer rather than a satisficer. I agree the AI could be a maximizer, but don't see that it must be. How much does this risk go away if we give the AI small ambitions?

Comment author: Simulation_Brain 13 August 2010 07:27:19PM *  3 points

Now this is an interesting thought. Even a satisficer with several goals but no upper bound on each will use all available matter on the mix of goals it's working towards. But a limited goal (make money for GiantCo, unless you reach one trillion, then stop) seems as though it would be less dangerous. I can't remember this coming up in Eliezer's CFAI document, but I suspect it's in there with holes poked in its reliability.
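As a toy illustration of that intuition (entirely hypothetical numbers and plan names), capping the utility function makes the agent indifferent among all plans past the threshold, and a tie-break toward frugality then picks the least resource-hungry plan:

```python
def choose(plans, utility):
    """Pick the highest-utility plan; break ties toward fewer resources used."""
    return max(plans, key=lambda p: (utility(p["money"]), -p["resources_used"]))

# Hypothetical plans: money earned vs. fraction of available matter consumed.
plans = [
    {"name": "modest factory", "money": 1e12, "resources_used": 0.001},
    {"name": "convert the biosphere", "money": 1e15, "resources_used": 0.9},
    {"name": "convert the solar system", "money": 1e20, "resources_used": 1.0},
]

unbounded = lambda m: m          # maximizer: more money is always better
capped = lambda m: min(m, 1e12)  # bounded goal: one trillion, then stop

print(choose(plans, unbounded)["name"])  # the most extreme plan wins
print(choose(plans, capped)["name"])     # the modest plan suffices
```

Of course, the hard part this sketch glosses over is that the tie-breaking preference for using fewer resources has to be specified correctly too, which is presumably where the holes get poked.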

Comment author: kodos96 13 August 2010 08:47:35AM 3 points

The only part of the chain of logic that I don't fully grok is the "FOOM" part. Specifically, the recursive self improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would encounter diminishing returns after a point - after all, there would seem to be a theoretical minimum number of instructions necessary to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it do (in software)? Come up with something better than Bayesianism?

Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would 'help,' but isn't strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis - as it would seem to be to me as well. Because if the intelligence explosion isn't coming from software self-improvement, then where is it coming from? Moore's Law? That isn't fast enough for a "FOOM", even if intelligence scaled linearly with the hardware you threw at it, which my intuition tells me it probably wouldn't.

Now of course this is all just intuition - I haven't done the math, or even put a lot of thought into it. It's just something that doesn't seem obvious to me, and I've never heard a compelling explanation to convince me my intuition is wrong.

Comment author: Simulation_Brain 13 August 2010 07:22:22PM 2 points

I think the concern stands even without a FOOM; if AI gets a good bit smarter than us, however that happens (design plus learning, or self-improvement), it's going to do whatever it wants.

As for your "ideal Bayesian" intuition, I think the challenge is deciding WHAT to apply it to. The amount of computational power needed to apply it to every thing and every concept on earth is truly staggering. There is plenty of room for algorithmic improvement, and it doesn't need to get that good to outwit (and out-engineer) us.
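A back-of-envelope calculation shows how staggering "apply it to everything" gets: an exact joint posterior over n binary propositions must track 2^n states.

```python
# Exact Bayesian inference over n binary propositions tracks 2**n joint states.
for n in (10, 50, 300):
    print(n, 2 ** n)

# At n = 300 the state count (~2e90) already exceeds the roughly 1e80 atoms
# in the observable universe, so exhaustive updating is hopeless: any
# practical reasoner must approximate, and must choose what to model at all.
```

So even an "ideal Bayesian" core leaves enormous room for algorithmic improvement in deciding which hypotheses are worth representing in the first place.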

Comment author: Simulation_Brain 13 August 2010 07:56:24AM 5 points

I think there are very good questions in here. Let me try to simplify the logic:

First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply, and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few who have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).

Now to the logical argument itself:

a) We are probably at risk from the development of strong AI. b) The SIAI can probably do something about that.

The other points in the OP are not terribly relevant; Eliezer could be wrong about a great many things, but right about these.

This is not a castle in the sky.

Now to argue for each: there's no good reason to think AGI will NOT happen within the next century. Our brains produce general intelligence; why shouldn't artificial systems? Artificial systems could do essentially nothing a century ago; even without a strong exponential, they're clearly getting somewhere.

There are lots of arguments for why AGI WILL happen soon; see Kurzweil among others. I personally give it 20-40 years, even allowing for our remarkable cognitive weaknesses.

Next, will it be dangerous?
a) Something much smarter than us will do whatever it wants, and very thoroughly. (This doesn't require godlike AI, just smarter than us. Self-improving helps, too.)
b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.)
Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

Finally, it seems that SIAI might be able to do something about it. If not, they'll at least help raise awareness of the issue. And as someone pointed out, achieving FAI would have a nice side effect of preventing most other existential disasters.

While there is a chain of logic, each of the steps seems likely, so multiplying probabilities gives a significant estimate of disaster, justifying some resource expenditure to prevent it (especially if you want to be nice). (Although spending ALL your money or time on it probably isn't rational, since effort and money generally have sublinear payoffs toward happiness.)
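To make the multiplication concrete, with purely illustrative numbers (not my actual estimates):

```python
from math import prod

# Hypothetical probabilities for each link in the chain (illustrative only).
steps = {
    "AGI this century": 0.8,
    "dangerous by default": 0.7,
    "SIAI can help": 0.3,
}

p_all = prod(steps.values())
print(round(p_all, 3))  # prints 0.168 with these illustrative numbers
```

Even with a fairly skeptical figure for the weakest link, the product stays far above the threshold at which an existential stake justifies spending something.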

Hopefully this lays out the logic; now, which of the above do you NOT think is likely?

Comment author: Simulation_Brain 25 June 2010 06:07:15PM 5 points

I think the point is that not valuing non-interacting copies of oneself might be inconsistent. I suspect it's true: consistency requires valuing parallel copies of ourselves just as we value future variants of ourselves (which is why we preserve our lives). Our future selves also can't "interact" with our current self.

In response to comment by [deleted] on Defeating Ugh Fields In Practice
Comment author: [deleted] 20 June 2010 11:06:47PM 0 points

I'm glad it works for you, but 1) does not work if you don't start, and 2) does not work if you care about quality (except if you're a genius).

In response to comment by [deleted] on Defeating Ugh Fields In Practice
Comment author: Simulation_Brain 21 June 2010 10:09:24PM 0 points

Quality matters if you have a community that's interested in your work; you'll get more "nice job" comments if it IS a nice job.
