Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?
I suspect this has been answered on here before in a lot more detail, but:
Also, specifically in AI, there is some precedent for there to be only a few years between "researchers get ...
I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.
Isn't it convenient that I don't have to care about these infinitely many theories?
why not?
Why not what?
Why don't you have to care about the infinity of theories?
you can criticize categories, e.g. all ideas with feature X
...How can you know that every single theory in that infinity has feature X?
Has anyone here put much thought into parenting/educating AGIs?
I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.
I wrote a bit about my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.
Because
"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary switch operation"
https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s
AI will be quantitatively smarter because it will be able to think over 10,000 times faster (an arbitrary, conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
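For a sense of scale, here's a rough back-of-the-envelope sketch of the hardware figures quoted above. The 1 GHz electronic clock rate is my own illustrative assumption; the other numbers are the ones from the quote.

```python
# Back-of-the-envelope comparison of brain vs. electronic hardware.
# These are rough, illustrative figures, not measurements.

speed_of_light = 3.0e8                      # m/s
axon_signal_speed = speed_of_light / 1e6    # "a millionth the speed of light" ≈ 300 m/s
neuron_firing_rate = 100                    # Hz, as quoted
cpu_clock_rate = 1e9                        # Hz, an assumed (conservative) 1 GHz clock

signal_speed_ratio = speed_of_light / axon_signal_speed
serial_step_ratio = cpu_clock_rate / neuron_firing_rate

print(f"Signal speed ratio:     {signal_speed_ratio:.0e}")  # 1e+06
print(f"Serial step-rate ratio: {serial_step_ratio:.0e}")   # 1e+07
```

Even the serial step-rate ratio alone exceeds the 10,000x lower bound claimed above by several orders of magnitude.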
I bet it does. What do you do, and what are some of your main philosophical beliefs that you would consider important if they turned out to be mistaken? (I'll be happy to answer the same question, though not under a ban on pointing to my websites.)
I reviewed all the well-known options (and some but not all obscure ones – and I don't mind reviewing more obscure ones when someone interested in conversation brings one up) and made a judgement about which one is correct and non-refuted, concluding that all the others are refuted by arguments I know. In epistemology, that one is CR.
I would expect other people to attempt something like this, but I find they normally haven't – and don't want to begin. Does this sort of project interest you? If not, what sort of truth-seeking does interest you?
And if you want me to put in extra work to use fewer references than I normally would – do you have any value to offer to motivate me to do this? For example, do you think you'll continue the conversation to a conclusion? Most people don't, and I currently don't expect you to, and I'd rather not jump through a bunch of hoops for you and then you just stop responding.
What exactly is the falsifiable claim that you're making and how would you expect it to be falsified? :-)
Oh, there are a lot. The existence of an afterlife, for example. The nature of morality. Things like that.
How confident are you of your judgement?
Not particularly, because of lack ...