Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?
I suspect this has been answered on here before in a lot more detail, but:
Also, specifically in AI, there is some precedent for there to be only a few years between "researchers get ...
I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.
Isn't it convenient that I don't have to care about these infinitely many theories?
why not?
Why not what?
Why don't you have to care about the infinity of theories?
you can criticize categories, e.g. all ideas with feature X
...How can you know that every single theory in that infinity has feature X?
Has anyone here put much thought into parenting/educating AGIs?
I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.
I wrote a bit about my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.
Because
"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"
https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s
AI will be quantitatively smarter because it'll be able to think over 10000 times faster (an arbitrary conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
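To see where a figure like 10000x might come from, here's a minimal back-of-envelope sketch in Python. The 100 Hz firing rate is from the quote above; the ~1 GHz switching rate and the discount for overhead/parallelism are my illustrative assumptions, not claims from the talk.

```python
# Back-of-envelope comparison of biological vs. silicon serial "clock rates".
neuron_firing_rate_hz = 100       # typical cortical neuron, per the quote above
silicon_switch_rate_hz = 1e9      # ~1 GHz, an assumed conservative hardware rate

raw_speedup = silicon_switch_rate_hz / neuron_firing_rate_hz
print(f"raw serial speedup: ~{raw_speedup:,.0f}x")   # ~10,000,000x

# Discount by three orders of magnitude for software overhead and for the
# brain's massive parallelism (an assumption, chosen to be generous):
discounted_speedup = raw_speedup / 1000
print(f"discounted speedup: ~{discounted_speedup:,.0f}x")  # ~10,000x
```

Even with that generous discount, the sketch lands around the 10000x "conservative lower bound" quoted above.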
I'm interested in smart weird people :-P
Oh, boy. We are having fundamental philosophical disagreements and you think dictionary definitions of things like "wrong" are adequate?
You say that philosophy is not falsifiable. OK, let's assume that for the time being. So can we apply the term "wrong" to some philosophies and "right" to others? On which basis? You will say "critical arguments". What is a critical argument? Within which framework are you going to evaluate them? You want "mistakes" pointed out to you. What kind of things will you accept as a "mistake" and what kind of things will you accept as indicating that it's valid?
I disagree that definitions are not all that important.
Well, obviously I think they are correct to some degree (remember, for me "truth" is not a binary category).
See above: what is a "mistake", given that we're deliberately ignoring empirical testing?
Things I'd like to learn are more like new-to-me frameworks, angles of view, and reinterpretations of known facts. To use Scott Alexander's terminology, I want to notice concept-shaped holes.
Criteria of mistakes are themselves open to discussion. Some typical important ways to point out mistakes are:
1) internal contradictions, logical errors
2) non sequiturs
3) a reason X wouldn't solve problem Y, even though X is being offered as a solution to Y
4) an idea assumes/uses and also contradicts some context (e.g. background knowledge)
5) pointing out a contradiction with evidence
6) pointing out ambiguity, vagueness
There are many other types of critical arguments. For example, sometimes an argument, X, claims to refute Y, but X, if correct, refutes eve...