
This post got me thinking about why falsificationism, while possibly not the best epistemology out there, is great for human scientists.

1) Thinking about how to disprove your theory counteracts confirmation bias

2) Figuring out how to disprove your theory, doing that test, and finding your theory wrong means you don't have to spend $25 million on a project that may or may not go anywhere.

Hmm, the appeal of getting eaten first...

I'm not even that optimistic about 'success'. It actually feels a little weird how much my thinking on AI has changed. A few years ago a digital toddler might have seemed like a sensible approach.

What seems nonsensible about it now? (I presume you're meaning in the sense of even succeeding at the AGI goal, not in the sense of succeeding in the Friendliness goal.)

If you think you understand what's needed to make a human-level AI, then you shouldn't need a five-step plan (at least not with these steps). If you expect to learn anything important from the toddler stage that will let you move towards the adult stage, then you already know you don't understand the problem.

From: http://opencog.org/faq/

Q: What’s the Secret Sauce?

In a phrase: cognitive synergy.

The insight here is that many parts need to work together.

Setting the "toddler" target makes it seem like you're breaking the problem down into a more manageable chunk, but it's actually at least as large as the original problem. The village idiot and Einstein are very close together on the spectrum that includes dogs, chimps, and superhuman AI, and I think a 4-year-old might be above the village idiot. If you can do that, just finish it.

4-year-old-level problem-solving ability at 4-year-old speeds is a severely anthropomorphic prediction of a design's abilities. If you could do that, why not crank up the speed (at least) and get a thing that can do real work? Perhaps still conceptually simple by adult standards, but way ahead of current bots. You could almost certainly get through very complex problems if you could give instructions to an immortal toddler.

There is no such thing as a digital toddler that is not a recompile away from superhuman AI. I'm guessing that this plan stems from some kind of humility, or not wanting to fail. It feels easier to make what you think is a weak intellect. Given the existing virtual dog, it might feel like they're making progress. It would certainly be possible to make a toddler that is increasingly convincing as a toddler.

This wastes a bunch of effort on machine vision, NLP, and dancing robots, which I think do not feed into general intelligence. If you're convinced friendliness isn't important and you need a hard sample problem for your AI, pick cancer, not cyberchat.

I figured the plan comes from how human intelligence seems to be built up in two stages: first the genetics-driven fetal brain formation without any significant sensory input to drive things, then the long slog of picking up patterns from loads and loads of noisy sensory data from toddlerhood onward.

Working from this guess, anatomical differences between the brain of a toddler and the brain of an adult aren't important here; the point is that after the "toddler" stage, a human-inspired AI design will learn the stuff it needs by processing sensory input, not necessarily with additional brain structure engineering.

There may be a case to be made why such a human-inspired design is not the way to go, but you seem to be arguing against an AI that's designed to stay at the level of a toddler, instead of proceeding to learn from what it observes to develop towards adult intelligence like real toddlers do.

Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?

Upon review, I think you are correct that this plan does think it's solving all the major programming problems by the toddler stage, and the rest is just education.

But the question is still: If you understand how to make intelligence, why make one that's even nearly as low as us?

Why should it take an AI design more than five minutes to process a significant enough amount of rich sensory data to push it to an adult level?

I don't know, I have never built an AGI. One possibility is that the design isn't one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do. This would slow down initial takeoff since the only intelligent entities initially available are slow live humans. Things could of course be sped up once there's an ecosystem of adult-level intelligent AIs that could be run as fast as the computers are able.
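
A toy way to see that bottleneck (the numbers and names below are mine, purely illustrative, nothing from OpenCog): if each learning update has to wait on a live human's response, wall-clock time is dominated by the human no matter how fast the hardware is.

```python
# Illustrative only: assumed figures, not measurements.
updates_needed = 1_000_000            # hypothetical number of feedback-driven updates

# Compute-bound learning from static data: limited by how fast we can process it.
batch_updates_per_second = 10_000     # assumed
batch_seconds = updates_needed / batch_updates_per_second          # ~100 seconds

# Interaction-bound learning: each update waits on a human teacher's reply.
human_seconds_per_exchange = 10       # assumed
interactive_seconds = updates_needed * human_seconds_per_exchange  # ~1e7 s, i.e. months

print(f"batch: ~{batch_seconds:.0f} s, interactive: ~{interactive_seconds:.0e} s")
```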

As for the five-minute figure, let's say humans need about 0.2e9 seconds of interacting with their surroundings to grow up to a semi-adult level. Let's go with Ray Kurzweil's estimate of 1e16 computations per second needed for a human-brain-inspired AI. Five minutes would then require hardware on the order of 1e22 cps. Moore's law isn't quite there yet; I think the current fastest computers are somewhere in the 1e15 range. So you'd have to be able to shave quite a few orders of magnitude off the back-of-the-envelope estimate right off the cuff. I could also do back-of-the-envelope nastiness with the human sensory bandwidth times 0.2 gigaseconds and how many bits you can push to a CPU in a second, but you probably get the idea.
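
A quick sketch of that arithmetic, using only the figures stated above (all of them this comment's assumptions, not measurements):

```python
seconds_to_semi_adult = 0.2e9          # ~6 years of interaction, assumed above
cps_human_equivalent = 1e16            # Kurzweil's estimate, as quoted above
total_ops = seconds_to_semi_adult * cps_human_equivalent   # ~2e24 computations

five_minutes = 5 * 60                  # seconds
required_cps = total_ops / five_minutes
print(f"cps needed for a five-minute 'childhood': ~{required_cps:.0e}")
# -> ~7e+21, i.e. on the order of 1e22, versus roughly 1e15 for the fastest
#    machines at the time: about seven orders of magnitude short.
```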

So it seems to come down to at least how simple or messy the basic discovered AI architecture ends up being. A hard takeoff from the first genuine AGI architecture to superhuman intelligence in days requires an architecture something like ten orders of magnitude more efficient than humans. That needs to be not only possible, but discoverable at the stage of doing the first prototypes of AGI.

The required minimum complexity for an AGI is a pretty big unknown right now, so I'm going with the human-based estimations for the optimistic predictions where the AGI won't kill everyone. Hard takeoff is of course worth considering for the worst-case scenario predictions.

First, I want to say I think that was a really good response.

One possibility is that the design isn't one that just needs to chew on static data to learn, and instead it needs to try to interact with existing intelligent entities with its first guesses at behavior and then use the feedback to refine its behavior, like human children do.

I think that this is somehow muddling the notions of intelligence and learning-the-problem, but I don't have it pinned down at the moment. Feeding training data to an AI should only be needed if the programmers are ignorant of the relevant patterns which will be produced in the mature AI. If the adult AI is actually smarter than the best AI the programmers could put out (the toddler), then something changed which would correspond to a novel AI design principle. But all parties might still be ignorant of that principle, if for example it occurred one day when a teacher stumbled onto the right way to explain a concept to the toddler, but it wasn't obvious how that tied in to the new, more efficient data structure in the toddler's mind.

Because if you could spell out the things that the AI child would learn that would turn it into an AI scientist, then you could just create those structures directly.

So this isn't "let's figure out AI as we go along" as I originally thought, but "let's automate the process of figuring out AI", which is more dangerous and probably more likely to succeed. So I'm updating in the direction of Vladimir_Nesov's position, but this strategy is still dependent on not knowing what you're doing.

I could get four orders of magnitude back by retreating to five weeks instead of five minutes, and that's still a hard takeoff (rough arithmetic below). I also think it would be relatively easy to get at least an order of magnitude speedup over human learning just by tweaking what gets remembered and what gets forgotten.
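
Continuing the earlier back-of-the-envelope sketch with the same assumed figures:

```python
total_ops = 0.2e9 * 1e16                    # same assumed total as before, ~2e24
five_weeks = 5 * 7 * 24 * 60 * 60           # seconds
print(f"~{total_ops / five_weeks:.0e} cps") # -> ~7e+17, roughly 1e4 times less
                                            #    demanding than the five-minute case
```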

That's probably the real danger here: you learn more about what's really important to intelligence, and then you make your program a little better, or it just learns and gets a little better, and you celebrate. You don't at that point suddenly realize that you built an AI without knowing how it works. So you gradually go from a design that needs a small cluster just to act like a dog to a design that's twice as smart as you, running on your desktop's idle cycles, but you never panic and reevaluate friendliness.

The same kind of friendliness problem could happen with whole brain emulations.


Ben Goertzel responds to Friendliness concerns in a new blog post.

I sincerely believe I have a recipe for creating a human-level thinking machine! In an ethical way, and with computing resources currently at our disposal.

I'm a little bit suspicious, but in an ethical way? This reminds me of an argument by Greg Egan:

What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn't give us the right to inflict the same kind of suffering on anyone else.

This is potentially an important issue in the real world. It might not be long before people are seriously trying to “evolve” artificial intelligence in their computers. Now, it's one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination. If the first AI was created that way, it would have every right to despise its creators. [The Dust Theory: FAQ]

I want to highlight the difficulties involved in some other problem besides AGI, namely P vs. NP:

P vs. NP is an absolutely enormous problem, and one way of seeing that is that there are already vastly, vastly easier questions that would be implied by P not equal to NP but that we already don’t know how to answer. So basically, if someone is claiming to prove P not equal to NP, then they’re sort of jumping 20 or 30 nontrivial steps beyond what we know today. (...) We have very strong reasons to believe that these problems cannot be solved without major — enormous — advances in human knowledge. (...) So in order to prove such a thing, a prerequisite to it is to understand the space of all possible efficient algorithms. That is an unbelievably tall order. So the expectation is that on the way to proving such a thing, we’re going to learn an enormous amount about efficient algorithms, beyond what we already know, and very, very likely discover new algorithms that will likely have applications that we can’t even foresee right now. [3 questions: P vs. NP.]

So is AGI that much easier to solve than a computational problem like P vs. NP that I should believe Ben Goertzel here? Besides, take a constrained, well-understood domain like Go: AI still performs awfully at it. So far I have believed that it would take at least one paradigm-shattering conceptual revolution before someone comes up with AGI. Sure, you don't have to solve any problem but that of AGI itself. I'm completely unable to judge any claims here, but my gut feeling is that this is questionable.

Also if this is true, then what about friendly AI? Is he claiming that he solved the problem of TDT as well?

I believe I once read that Marvin Minsky basically claims the same: with enough money he could build an AGI.

This gives me a new approach to the self-enhancement problem: use OpenCog to tackle P vs NP. That is, use OpenCog tools to develop a system capable of representing and "thinking about" the problems of computational complexity theory. The models of self-enhancement that we have now, like Schmidhuber's Gödel machine, are, like AIXI, brute-force starting points that might take forever to pick up speed. But if we design a system specifically to tackle the advanced problems of theoretical computer science, it will start out with concepts and heuristics likely to assist efficient self-enhancement, rather than having to discover all of them by itself.

Re: Greg Egan

Before the test, an ensemble of copies of the AGI would be created, with identical knowledge state. Each copy would interact with a different human teacher, who would demonstrate to it a certain behavior.

...

The multiple copies may, depending on the AGI system design, then be able to be reintegrated,

From: http://wiki.opencog.org/wikihome/images/3/39/Preschool.pdf

So let's make multiple divergent copies per day, and maybe "re-integrate" them if we decide to design for that.