All of Bgoertzel's Comments + Replies

Stuart -- Yeah, the line of theoretical research you suggest is worthwhile....

However, it's worth noting that I and the other OpenCog team members are pressed for time, and have a lot of concrete OpenCog work to do. It would seem none of us really feels like taking a lot of time, at this stage, to carefully formalize arguments about what the system is likely to do in various situations once it's finished. We're too consumed with trying to finish the system, which is a long and difficult task in itself...

I will try to find some time in the near term to ...

Thanks for sharing your personal feeling on this matter. However, I'd be more interested if you had some sort of rational argument in favor of your position!

The key issue is the tininess of the hyperbubble you describe, right? Do you have some sort of argument regarding some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mindspace in mind?)

To put it differently: What are the properties you think a mind needs to have, in order for the "raise a nice baby AGI" approach to have a reasonable chance of effectiveness? Which are the properties of the human mind that you think are necessary for this to be the case?

Well, consider this: it takes only a very small functional change to the human brain to make 'raising it as a human child' a questionable strategy at best. Crippling a few features of the brain produces sociopaths who, notably, cannot be reliably inculcated with our values, despite sharing 99.99etc% of our own neurological architecture.

Mind space is a tricky thing to pin down in a useful way, so let's just say the bubble is really tiny. If the changes you're making are larger than the changes between a sociopath and a neurotypical human, then you should...

ewbrownv

Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.

Unfortunately, while the cog sci community has produced reams of evidence on this point, they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is t...

Stuart_Armstrong
I think some cross-cultural human studies might be a way of starting to answer this question. Looking at autists, or other non-neurotypical minds, would also be helpful. Studying sociopaths or psychopaths would also be important (they pass our society's behaviour filters, and yet misbehave). The errors of early AGIs (as long as they're left unpatched!!!) will also be very revealing, and let us try to trace the contours of non-human minds, and get insights into human minds as well. Formal philosophical measures (what kind of consistent long-term behaviours can exist in theory?) may also help. More ideas will no doubt spring to mind - if you want, we can design a research program!
Bgoertzel

Stuart: The majority of people proposing the "bringing up baby AGI" approach to encouraging AGI ethics are NOT making the kind of naive cognitive error you describe here. This approach to AGI ethics is not founded on naive anthropomorphism. Rather, it is based on the feeling of having a mix of intuitive and rigorous understanding of the AGI architectures in question, the ones that will be taught ethics.

For instance, my intuition is that if we taught an OpenCog system to be loving and ethical, then it would very likely be so, according to broa...

nigerweiss
An AGI that is not either deeply neuromorphic or possessing a well-defined and formally stable utility function sounds like... frankly, one of the worst ideas I've ever heard. I'm having difficulty imagining a way you could demonstrate the safety of such a system, or trust it enough at any point to give it enough resources to learn. Considering that the fate of intelligent life in our future light cone may hang in the balance, standards of safety must obviously be very high! Intuition is, I'm sorry, simply not an acceptable criterion on which to wager at least billions, and perhaps trillions, of lives. The expected utility math does not wash if you actually expect OpenCog to work.

On a more technical level, human values are broadly defined as some function over a typical human brain. There may be some (or many) optimizations possible, but not such that we can rely on them. So, for a really good model of human values, we should not expect to need less than the entropy of a human brain. In other words, nobody, whether they're Eliezer Yudkowsky with his formalist approach or you, is getting away with less than about ten petabytes of good training samples. Those working on uploads can skip this step entirely, but neuromorphic AI is likely to be fundamentally less useful. And this assumes that every bit of evidence can be mapped directly to a bit in a typical human brain map. In reality, for a non-FOOMed AI, the mapping is likely to be many orders of magnitude less efficient. I suspect, but cannot demonstrate right now, that a formalist approach starting with a clean framework along the lines of AIXI is going to be more efficient.

Quite aside from that, even assuming you can acquire enough data to train your machine reliably, you still need it to do... something. Human values include a lot of unpleasant qualities. Simply giving it human values and then allowing it to grow to superhuman intellect is grossly unsafe. Ted Bundy had human values. If your plan is to train...
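For what it's worth, here is a hedged back-of-envelope sketch of where a figure on the order of ten petabytes could come from. The neuron count, synapse count, and bytes-per-synapse below are assumed order-of-magnitude estimates for illustration, not numbers taken from the comment above:

```python
# Hedged back-of-envelope sketch (assumed order-of-magnitude figures, not data
# from the thread): if a "really good model of human values" needs roughly the
# information content of a human brain, a crude storage estimate looks like:
neurons = 1e11              # ~10^11 neurons (rough textbook-scale estimate)
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron (rough estimate)
bytes_per_synapse = 10      # assumed allowance for weight + connectivity state

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} PB")  # ~10 PB, the order of magnitude cited above
```

Different assumptions about bits per synapse easily shift this by an order of magnitude in either direction, which is roughly the point being made: the amount of evidence required is enormous either way.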
Dr_Manhattan
Ben, your response is logical (if not correct), but the fact that many AI researchers advocate the "upbringing approach" (for other architectures) makes me very suspicious that they're anthropomorphising after all.

Thanks for your answer, Ben!

First of all, all of these methods involve integrating the AGI into human society. So the AGI is forming its values, at least in part, through doing something (possibly talking) and getting a response from some human. That human will be interpreting the AGI's answers, and selecting the right response, using their own theory of the AGI's mind - nearly certainly an anthropomorphisation! Even if that human develops experience dealing with the AGI, their understanding will be limited (as our understanding of other humans is limited, ex...

Bgoertzel

So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.

Like Robin and Eli and perhaps yourself, I've read the heuristics and biases literature also. I'm not so naive as to make judgments about huge issues, that I think about for years of my life, based strongly on well-known cognitive biases.

It seems more plausible... (read more)

pjeby

So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.

Welcome to humanity. ;-) I enjoy Hanson's writing, but AFAICT, he's not a Bayesian reasoner.

Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, ev...

wedrifid

Regarding your final paragraph: Is your take on the debate between Robin and Eli about "Foom" that all Robin was saying boils down to "la la la I can't hear you"?

Good summary. Although I would have gone with "la la la la If you're right then most expertise is irrelevant. Must protect assumptions of free competition. Respect my authority!"

What I found most persuasive about that debate was Robin's arguments - and their complete lack of merit. The absence of evidence is evidence of absence when there is a motivated competent debater with an incentive to provide good arguments.

JamesAndrix
I recall getting a distinct impression from Robin which I could caricature as "lalala you're biased with hero-epic story." I also recall Eliezer asking for a probability breakdown, and I don't think Robin provided it.
Bgoertzel

I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.

However, I strongly suspect that when the argument is laid out formally, what we'll find is that

-- given our current knowledge about the pdfs of the premises in the argument, the pdf on the conclusion is verrrrrrry broad, i.e. we can hardly conclude anything with much of any confidence ...

So, I think that the formalization will lead to the conclusion that

-- "we can NOT confidently say, now, that: Building advanced AGI with... (read more)

CarlShulman
I agree with both those statements, but think the more relevant question would be: "conditional on it turning out, to the enormous surprise of most everyone in AI, that this AGI design is actually very close to producing an 'artificial toddler', what is the sign of the expected effect on the probability of an OK outcome for the world, long-term and taking into account both benefits and risks?"
MatthewB
I agree. I doubt you would remember this, but we talked about this at the Meet and Greet at the Singularity Summit a few months ago (in addition to CBGBs and Punk Rock and Skaters). James Hughes also mentioned you at a conference in NY where we discussed this very issue. One thing you mentioned at the Summit (well, in conversation) was that the Scary Idea was tending to cause some paranoia among people who might otherwise be contributing more to the development of AI (of course, you also seemed pretty hostile to brain emulation too), as it tends to slow funding that could be going to AI.

I have thought a bit about these decision theory issues lately and my ideas seem somewhat similar to yours though not identical; see

http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf

if you're curious...

-- Ben Goertzel

timtyler
It's the "do what a superintelligence would do" decision theory!!!