Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?
I suspect this has been answered on here before in a lot more detail, but:
Also, specifically in AI, there is some precedent for there to be only a few years between "researchers get ...
I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.
Isn't it convenient that I don't have to care about these infinitely many theories?
why not?
Why not what?
Why don't you have to care about the infinity of theories?
you can criticize categories, e.g. all ideas with feature X
...How can you know that every single theory in that infinity has feature X?
Has anyone here put much thought into parenting/educating AGIs?
I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.
I wrote a bit on my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.
Because
"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"
https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s
AI will be quantitatively smarter because it will be able to think over 10000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
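To see why 10000x is a conservative lower bound, here is a back-of-envelope sketch using only the figures quoted above; the CPU clock rate is my own assumed reference point, not from the talk.

```python
# Back-of-envelope comparison of biological vs. electronic timescales,
# using the figures quoted above. The CPU clock rate is an assumption.
speed_of_light_m_s = 3.0e8
axon_signal_m_s = speed_of_light_m_s / 1e6   # "a millionth the speed of light"
neuron_firing_hz = 100.0                     # quoted neuron firing rate
cpu_clock_hz = 3.0e9                         # typical modern CPU clock (assumed)

serial_speed_ratio = cpu_clock_hz / neuron_firing_hz

print(f"Axon signal speed: {axon_signal_m_s:.0f} m/s")
print(f"Clock ratio (CPU / neuron firing): {serial_speed_ratio:.0e}")
```

On these numbers the raw clock-rate gap alone is about 3e7, so a 10000x speedup leaves several orders of magnitude of headroom even before considering better algorithms.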
I'm not playing with words, I'm expressing the CR perspective. You apparently disagree, but if CR is correct then what I said is correct. So CR's correctness has consequences for your life.
I am not offering reductionism. Married people literally do things like discuss disagreements and try to solve problems – exactly the kind of thing CR governs. That doesn't mean CR is the only thing you need to know – you also need to know relationship-specific stuff (which you btw need to learn – and so CR is relevant there).
I think many ideas aren't models. This is a CR belief which would have impacts on your thinking if you understood it and decided it was correct.
Can you be more specific? How does anything I'm doing or saying clash with reality? Arguments about reality are totally welcome, and I've both sought them out and created them myself.
BTW CR philosopher David Deutsch is literally a founder of a parenting/educational movement. Here is one of my essays about CR and parenting: http://fallibleideas.com/taking-children-seriously
So what is the domain that CR claims? I thought it was merely epistemology, but apparently it includes marital counseling and parenting advice?
By the way, your style pattern-matches to religious proselytizing very well.
So far we had the underlying reality and imperfect representations thereof which we called "models". What is an "idea"?
You said