To me, it's about maximizing utility.
Would you want to be killed today? That's how much you value life over non-existence.
How would you react if a loved one were to be killed today? Same as above, that's how much you value their life over non-existence.
Almost everybody agrees that life has value, considerable value, over non-existence. Hence, under some commonly agreed (if arbitrary) utility function, giving life to somebody, giving existence to somebody, probably beats all the good deeds you could do in a lifetime, just as murder would probably outweigh all the good deeds you did over your lifetime.
The comfort of one's life is definitely important, but I'd bet the majority of depressed people still don't want to die. There's a large margin before life becomes so bad that you'd want to die, and even then, you'd still have to weigh that against the (usually) long positive part of your life during which you still wanted to live.
Hence,

"you should also actually create the utility for all these new lives"

might not be a problem if life itself, if simply being conscious, has an almost infinite weight under most living conditions. In our arbitrary utility function, being an African kid rummaging through a dump might have a weight of 1,000,000, while being a Finnish kid born into a loving and wealthy family might have a weight of 1,100,000 at the very best (it could well be lower, depending on opinions and on the kids' trajectories).
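To spell out the arithmetic under those assumed (and entirely made-up) weights: creating a new life even in the worst condition adds about 1,000,000 utility, while improving an existing life from the worst condition to the best adds only 1,100,000 − 1,000,000 = 100,000. On those numbers, bringing one person into existence outweighs roughly ten lifetimes' worth of maximal quality-of-life improvement, which is the sense in which "life itself" dominates the calculation.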
"it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with way less effort"
Whereas everybody agrees on the value of human life, not everybody will agree on the value of animal life (raise your hand if you've eaten chicken, beef, or fish in the past few weeks). Artificial life, certainly, unless the solution to the hard problem of consciousness rules out consciousness for some types of artificial life.
But the way you stated your idea might not describe how people actually feel about it: instead of "human life should be maximized", I lean more toward "human life should not be minimized" or "it's a good thing to increase human life".
Hi, thanks a ton for your interesting summary; even as a STEM non-literate I believe I managed to grasp a few interesting bits!
I wondered: what's your intuition about the software/operating system side, as in the way data is handled and ordered at the subconscious level? Wasteful or efficient? Isn't that a key point that could render neuromorphic hardware obsolete, and perhaps suggest that AGI could run on current PCs, as one commenter posited?
As in, if I ask myself where I was yesterday, a computer cognitive architecture would just need to follow a few pointers and do some dictionary lookups, returning a working list of references in a minimal number of cycles, while my human mind seems to run a 100%-CPU search with visual memories flooding back. I didn't ask for images or emotions or details of what was at the scene; I just needed a pointer to those places so I could pronounce their names.
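To make the contrast concrete, here is a minimal sketch of the kind of lookup I have in mind; the dictionary, dates, and place names are hypothetical and purely illustrative:

```python
# Hypothetical, purely illustrative: an episodic log keyed by date.
# The values are bare place identifiers (pointers), with no sensory detail attached.
episodic_log = {
    "2024-05-01": ["office", "gym"],
    "2024-05-02": ["home", "supermarket"],
}

def places_visited(date):
    # A single dictionary lookup: no images or emotions come back,
    # only the identifiers needed to name the places.
    return episodic_log.get(date, [])

print(places_visited("2024-05-02"))  # ['home', 'supermarket']
```

The whole answer comes back in a handful of operations, without dragging any imagery along.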
Isn't the human mind extremely limited by its wavering attention span? By its working memory? By the way information is probably intricately linked with sensory experience? How much compression/abstraction is taking place?
Think about asking a computer to design a house. As a human, I'd never even be able to hold the whole design in memory; I'd need pen and paper, which would slow me down considerably, and getting all the details right would take even more time. A computer could probably yield a proper design in a fraction of a second. Mental arithmetic is the same.
Yet the 2,000 TB sometimes cited for the mind seems like a small number. How does the brain do so much with so little?
There's also the question of people functioning normally after having half of their brain removed; what if 1/4 were enough? 1/8? That could mean at least one OOM of inefficiency for brains relative to normal human intelligence.
I just came here to point out that ChatGPT can generalize to some extent and hence possesses some sort of understanding of the natural world. Who are we to judge that its algorithms are intrinsically worse than ours and that it doesn't understand "in the proper way"?