Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?
I suspect this has been answered on here before in a lot more detail, but:
Also, specifically in AI, there is some precedent for only a few years between "researchers get ...
I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.
Isn't it convenient that I don't have to care about these infinitely many theories?
why not?
Why not what?
Why don't you have to care about the infinity of theories?
you can criticize categories, e.g. all ideas with feature X
...How can you know that every single theory in that infinity has feature X?
Has anyone here put much thought into parenting/educating AGIs?
I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.
I wrote a bit about my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.
Because
"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"
https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s
AI will be quantitatively smarter because it'll be able to think over 10000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
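A rough back-of-envelope sketch of where that 10000x lower bound could come from, using the firing-rate figure quoted above. The 1 GHz clock is my own illustrative assumption (a modest digital clock rate), not a claim from the talk:

```python
# Naive serial-step comparison of neural vs. electronic hardware.
# Figures: neurons fire at ~100 Hz (quoted above); 1 GHz clock is an
# illustrative assumption for digital hardware.

neuron_firing_hz = 100
cpu_clock_hz = 1e9  # assumption: modest 1 GHz clock

speed_ratio = cpu_clock_hz / neuron_firing_hz
print(f"serial-step speedup: {speed_ratio:.0e}x")

# Even granting a few orders of magnitude of overhead for however much
# algorithmic work one "step" has to do, 10000x survives as a
# conservative lower bound on this naive comparison.
assert speed_ratio >= 10_000
```

This ignores parallelism, memory bandwidth, and what a "step" of thinking actually costs, which is why it's only a plausibility sketch for the quantitative claim, not an argument for the qualitative one.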
Why don't you have to care about the infinity of theories?
It depends which infinity we're talking about. Suppose the problem is persuading LW ppl about Paths Forward and you say "Use a shovel". That refers to infinitely many different potential solutions. However, they can be criticized as a group by pointing out that a shovel won't help solve the problem. What does a shovel have to do with it? Irrelevant!
This criticism only applies to the infinite category of ideas about shovels, not to everything. I'm able to criticize that whole infinite group as a unit because it was brought up as a unit, and defined by a particular feature shared by all the theories in the group (that they involve trying to solve the problem specifically with a shovel).
The criticism is also contextual. It relates to using shovels for this particular problem. But shovels still help with some other problems. The context the criticism works in is broader than the single problem about paths forward persuasion of LW ppl – e.g. it also applies to anti-induction persuasion of Objectivists. This is typical – the point has some applicability to multiple contexts, but not universal applicability.
If you instead said "Do something" then you'd be bringing up a different infinity with more stuff in it, and I'd have a different reply: "Do what? That isn't helpful because you're pointing me to a large number of non-solutions without pointing out any solution. I agree there is a solution contained in there, somewhere, but I don't know what it is, and you don't seem to either, so I can't use it currently. So I'm stuck with the regular options like doing a solution I do know of or spending more time looking for solutions."
I will admit that there may be a solution with a shovel that actually would work (one way to get this is to take some great solution and then tack on a shovel, which is not optimal but may still be way better than anything we currently know of). So my criticism doesn't 100% rule shovels out. However, it rules shovels out for the time being, as far as is known, pending a new idea about how to make a shovel work. We can only act on solutions we know of, and I have a criticism of the shovel category of ideas as we currently understand it. Our current understanding is that shovels help us dig, and can be used as weapons, and can be salvaged for resources like wood and metal, and can be sold, but that just vaguely saying "use a shovel somehow" does not help me solve a problem of intellectually persuading people.
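The structure of the argument above can be sketched in code. This is my framing, not the commenter's: the infinite category is defined by a predicate (its shared feature), and the criticism targets the feature itself, so it applies to every member without enumerating any of them:

```python
# Toy sketch: criticizing an infinite category of theories at once.
# The category "solutions involving a shovel" is defined by a predicate,
# so a criticism of the defining feature covers all members, including
# ones nobody has thought of yet.

def uses_shovel(theory: str) -> bool:
    """Defining feature of the (infinite) category under discussion."""
    return "shovel" in theory

def criticism_applies(theory: str) -> bool:
    # The criticism "a shovel is irrelevant to persuading people"
    # applies to any theory with the feature, in this problem context.
    return uses_shovel(theory)

# Works for members we never listed in advance:
assert criticism_applies("dig a persuasion trench with a shovel")
assert not criticism_applies("write a clear essay")
```

Note the criticism is contextual, as the comment says: the predicate rules shovel-theories out for *this* problem, pending some new idea about how a shovel could help, not for digging or for every problem.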
I don't think humans think like rats, and I propose we don't debate animal "intelligence" at this time. I'll try to speak to the issue in a different way.
I think humans have enough control over their observing that they don't get stuck, unable to make progress, due to built-in biases and errors. For example, people can consciously think "that looked like a dog at first glance, but actually isn't a painting of a dog". So you can put thought into what the entities are. To the extent you have a default, you can partly change what that default is, and partly reinterpret it after doing the observation. And you're capable of observing in a sufficiently non-lossy way to get whatever information you need (at least with tools like microscopes, in some cases). You aren't just inherently, permanently blind to some ways of dividing up the world into entities, or to some observable things.
And whatever default your genes gave you about entities is not super reliable. It may be pretty good, but it's very much capable of errors. So I'll make a weaker claim: you can't infallibly observe entities. You need to put some actual thought into what the entities are and aren't, and the inductivist perspective doesn't address this well. (As to rats, they actually start making gross errors in some situations, due to their inability to think like a human to deal with situations they weren't evolved for.)
but when two ways of thinking about entities (or, a third option, not thinking about entities at all) give identical predictions, then you said it doesn't matter which you do? one entity (or none) is as good as another as long as the predictions come out the same?
but i don't think all ways of looking at the world in terms of entities are equally convenient for aiding us in making predictions (or for some other important things like coming up with new hypotheses!)
Huh, that shaft ended in a loud screech and a clang... Let's drop another shaft!
I don't have to care about the infinity of theories because if they all make exactly the same predictions, I don't care that they are different.
This is highly convenient because I am, to quote an Agent, "only human" and humans are not well set up to deal with infinities.
How do you know that without examining the sp...