Thanks for sharing your personal feeling on this matter. However, I'd be more interested if you had some sort of rational argument in favor of your position!
The key issue is the tininess of the hyperbubble you describe, right? Do you have some sort of argument regarding some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mindspace in mind?)
To put it differently: What are the properties you think a mind needs to have, in order for the "raise a nice baby AGI" approach to have a reasonable chance of effectiveness? Which are the properties of the human mind that you think are necessary for this to be the case?
Well, consider this: it takes only a very small functional change to the human brain to make 'raising it as a human child' a questionable strategy at best. Crippling a few features of the brain produces sociopaths who, notably, cannot be reliably inculcated with our values, despite sharing 99.99etc% of our own neurological architecture.
Mind space is a tricky thing to pin down in a useful way, so let's just say the bubble is really tiny. If the changes you're making are larger than the changes between a sociopath and a neurotypical human, then you should...
Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you'll get a different outcome.
Unfortunately, while the cog sci community has produced reams of evidence on this point they've also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is t...
Stuart: The majority of people proposing the "bringing up baby AGI" approach to encouraging AGI ethics are NOT making the kind of naive cognitive error you describe here. This approach to AGI ethics is not founded on naive anthropomorphism. Rather, it is based on a sense of having a mix of intuitive and rigorous understanding of the AGI architectures in question, the ones that will be taught ethics.
For instance, my intuition is that if we taught an OpenCog system to be loving and ethical, then it would very likely be so, according to broa...
Thanks for your answer, Ben!
First of all, all of these methods involve integrating the AGI into human society. So the AGI is forming its values, at least in part, through doing something (possibly talking) and getting a response from some human. That human will be interpreting the AGI's answers, and selecting the right response, using their own theory of the AGI's mind - nearly certainly an anthropomorphisation! Even if that human develops experience dealing with the AGI, their understanding will be limited (as our understanding of other humans is limited, ex...
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.
Like Robin and Eli and perhaps yourself, I've also read the heuristics and biases literature. I'm not so naive as to make judgments about huge issues, ones I think about for years of my life, based strongly on well-known cognitive biases.
It seems more plausible...
So, are you suggesting that Robin Hanson (who is on record as not buying the Scary Idea) -- the current owner of the Overcoming Bias blog, and Eli's former collaborator on that blog -- fails to buy the Scary Idea "due to cognitive biases that are hard to overcome"? I find that a bit ironic.
Welcome to humanity. ;-) I enjoy Hanson's writing, but AFAICT, he's not a Bayesian reasoner.
Actually: I used to enjoy his writing more, before I grokked Bayesian reasoning myself. Afterward, too much of what he posts strikes me as really badly reasoned, ev...
Regarding your final paragraph: Is your take on the debate between Robin and Eli about "Foom" that everything Robin was saying boils down to "la la la I can't hear you"?
Good summary. Although I would have gone with "la la la la If you're right then most of expertise is irrelevant. Must protect assumptions of free competition. Respect my authority!"
What I found most persuasive about that debate was Robin's arguments - and their complete lack of merit. The absence of evidence is evidence of absence when there is a motivated, competent debater with an incentive to provide good arguments.
I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.
However, I strongly suspect that when the argument is laid out formally, what we'll find is that
-- given our current knowledge about the pdfs of the premises in the argument, the pdf on the conclusion is very broad, i.e. we can hardly conclude anything with much confidence ...
So, I think that the formalization will lead to the conclusion that
-- "we can NOT confidently say, now, that: Building advanced AGI with...
I have thought a bit about these decision theory issues lately, and my ideas seem somewhat similar to yours, though not identical; see
http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf
if you're curious...
-- Ben Goertzel
Stuart -- Yeah, the line of theoretical research you suggest is worthwhile....
However, it's worth noting that I and the other OpenCog team members are pressed for time, and have a lot of concrete OpenCog work to do. It would seem none of us really feels like taking a lot of time, at this stage, to carefully formalize arguments about what the system is likely to do in various situations once it's finished. We're too consumed with trying to finish the system, which is a long and difficult task in itself...
I will try to find some time in the near term to ...