Hmm, so it is even more troubling when things eventually don't end well, even though initially everything may seem fine.
To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, and see whether arrangements other than master-slave are possible.
Thank you. What a coincidence, huh?
I think you miss the point that gradual disempowerment from AI happens because AI is the more economically (and otherwise) performant option, which systems can and will select instead of humans. Less reliance on human involvement leads to less bargaining power for humans.
But we already have examples, like Molochian corporate structures that have largely lost the need to value individual humans: they can afford a high churn rate, and there are always other people willing to take a decently paid corporate job even if the conditions are... suboptimal.
If the current trajectory continues, it's not the case that the AI you have is a faithful representative of you, personally, run in your garage. Rather it seems there is a complex socio-economic process leading to the creation of the AIs, and the smarter they are, the more likely it is they were created by a powerful company or a government.
This process itself shapes what the AIs are "aligned" to. Even if we solve some parts of the technical alignment problem, we still face the question of which sociotechnical process is acting as the "principal".
This touche...
Hmm, seems like someone beat me to it. Federico Faggin describes the idea I had in mind with his Quantum Information Panpsychism theory. Check it out here if interested - and I'd appreciate your opinion on the plausibility of the theory.
https://www.essentiafoundation.org/quantum-fields-are-conscious-says-the-inventor-of-the-microprocessor/seeing/
I know this might not be a very satisfying response, but as extraordinary claims require extraordinary arguments, I'm going to need a series of posts to explain - hence the Substack.
Haha, I've noticed you reacted with "I'd bet this is false" - I would be quite willing to present my arguments and contrast them with yours, but ultimately this is a philosophical belief, and no conclusive evidence can be produced for either side (that we know of). Sorry if my comment was misleading.
Yes, I see the connection with your section about digital people, and it is true that what I propose would make AI more compatible with merging with humans. But from my understanding, I don't think "digital" people or consciousness can exist. I strongly disbelieve in computational functionalism, and believe that consciousness is inherently linked to quantum particle wavefunction collapse. Therefore, I think that if we can recreate consciousness in machines, it will always be bound to specialized non-deterministic hardware. I will explain my positions in more detail on my Substack.
Hmm... I see the connection to 1984. But isn't it useful to have the words to spot something that is obviously already happening? (like clinical trial data for drugs being demonstrably presented, or even altered, with a certain malicious/monetary intent in mind)
Thank you for starting a discussion about this. I have two things to say:
1) In the post above, the "inftoxic" adjective denotes information that is very much false or incorrect. Additionally, it implies that the falseness was intentionally "put into" the data or story with an intent to mislead, cause harm, etc. So, in fact, the term is different from (and to me personally more useful than) the term "malinformation" (which I likewise find quite unhelpful).
2) Regardless of the usefulness of the terminology I used as an example, do you think we could use new words in and around information that would improve the way we conduct the debate, in an attempt to be less wrong?
Thank you for such a detailed write-up! I have to admit that I am teetering on the issue of whether or not to ban open-source LLMs, and as a co-founder of an AI-for-drug-design startup, I had taken the increased biosecurity risk as probably the single most important consideration. So I think the conversation sparked by your post is quite valuable.
That said, even considering all that you presented, I am still leaning towards banning powerful open-source LLMs, at least until we get much more information and, most importantly, before we establish other...
Responding to your last sentence: one thing I see as a cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such systems would be hardware-bound, humans could have an array of fail-safes to actually shut them down (in addition to other very important benefits, like reduced copy-ability and limited recursive self-improvement).
Of course, this alone will not prevent covert influence, power accumulation, etc., but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet "overthrowable" when they obviously overstep, then I think this could be acceptable.