All of Jáchym Fibír's Comments + Replies

Responding to your last sentence: one thing I see as a cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such minds would be hardware-bound, humans could have an array of fail-safes to actually shut such systems down (in addition to other very important benefits, like reduced copyability and limited capacity for recursive self-improvement).

Of course, this will not prevent covert influence, power accumulation, and so on, but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet "overthrowable" when they obviously overstep, then I think this could be acceptable.

Hmm, that makes it even more troubling: things may eventually end badly even though initially everything seems fine.

To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, to see whether arrangements other than master-slave are possible.

3AnthonyC
In some senses, we have done so many times, with human adults of differing intelligence and/or unequal information access, with adults and children, with humans and animals, and with humans and simpler autonomous systems (like sprites in games, or current robotic systems). Many relationships other than master-slave are possible, but I'm not sure any of the known solutions are desirable, and they're definitely not universally agreed on as desirable. We can be the AI's servants, children, pets, or autonomous-beings-within-strict-bounds-but-the-AI-can-shut-us-down-or-take-us-over-at-will. It's much less clear to me that we can be moral or political or social peers in a way that is not a polite fiction.
2dr_s
I guess! I remember he was always into theoretical QM and "Quantum Foundations", so this is not a surprise. It's not a particularly big field either; most researchers prefer to focus on less philosophical aspects of the theory.

I think you miss the point that gradual disempowerment from AI happens because AI is the more economically (and otherwise) performant option, one that systems can and will select instead of humans. Less reliance on human involvement leads to less bargaining power for humans.

But we already have examples, like Molochian corporate structures that have largely lost the need to value individual humans: they can afford a high churn rate, and there are always other people willing to take a decently paid corporate job even if the conditions are ... suboptimal.

If the current trajectory continues, it's not the case that the AI you have is a faithful representative of you, personally, run in your garage. Rather, it seems there is a complex socio-economic process leading to the creation of the AIs, and the smarter they are, the more likely it is they were created by a powerful company or a government.

This process itself shapes what the AIs are "aligned" to. Even if we solve some parts of the technical alignment problem, we still face the question of which sociotechnical process acts as the "principal".

This touche... (read more)

Hmm, seems like someone beat me to it. Federico Faggin describes the idea I had in mind with his Quantum Information Panpsychism theory. Check it out here if interested - I'd appreciate your opinion on the plausibility of the theory.

https://www.essentiafoundation.org/quantum-fields-are-conscious-says-the-inventor-of-the-microprocessor/seeing/

3dr_s
That sounds interesting! I'll give the paper a read and try to suss out what it means - it seems at least a serious enough effort. Here's the reference for anyone else who doesn't want to go through the intermediate news site: https://arxiv.org/pdf/2012.06580 (also: professor D'Ariano authored this? I used to work in the same department!)

I know this might not be a very satisfying response, but as extraordinary claims require extraordinary arguments, I'm going to need a series of posts to explain - hence the Substack.

Haha, I've noticed you reacted with "I'd bet this is false" - I would be quite willing to present my arguments and contrast them with yours, but ultimately this is a philosophical belief, and no conclusive evidence can be produced for either side (that we know of). Sorry if my comment was misleading.

2Nathan Helm-Burger
I think it's actually a neuroscience question, and that we will be able to gather data to prove it one way or the other. Consider, for instance, if we had some intervention, maybe some combination of drugs and electromagnetic fields, which could manipulate the physical substrate hypothesized to be relevant for wavefunction collapse interactions. If we shift the brain's perception/interaction/interpretation of the quantum phenomena, and the result is imperceptible to the subject and doesn't show up on any behavioral measurements, then that would be evidence against quantum phenomena being relevant. See further arguments here: https://www.lesswrong.com/posts/uPi2YppTEnzKG3nXD/nathan-helm-burger-s-shortform?commentId=AKEmBeXXnDdmp7zD6
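One way to make the evidential logic here explicit (my sketch, not part of the original comment): treat the quantum-collapse hypothesis $H_Q$ as predicting that disrupting the substrate produces some detectable change. Observing no change then shifts the posterior odds against $H_Q$ by the usual likelihood ratio:

$$\frac{P(H_Q \mid \text{no change})}{P(\neg H_Q \mid \text{no change})} = \frac{P(\text{no change} \mid H_Q)}{P(\text{no change} \mid \neg H_Q)} \cdot \frac{P(H_Q)}{P(\neg H_Q)}$$

If $H_Q$ strongly predicts an effect, then $P(\text{no change} \mid H_Q)$ is small while $P(\text{no change} \mid \neg H_Q)$ is near 1, so a null result substantially lowers the odds on $H_Q$ - just as the comment argues.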

Yes, I see the connection to your section about digital people, and it is true that what I propose would make AI more compatible with merging with humans. But from my understanding, I don't think "digital" people or consciousness can exist. I strongly disbelieve computational functionalism, and believe that consciousness is inherently linked to quantum particle wavefunction collapse. Therefore, I think that if we can recreate consciousness in machines, it will always be bound to specialized non-deterministic hardware. I will be explaining my positions in more detail on my Substack.

8dr_s
Speaking as someone with quite a bit of professional experience working with QM: that sounds a bit like a god of the gaps. We don't even know what collapse means, in practice. All we know about consciousness is that it seems like a classical enough phenomenon to experience only one branch of the wavefunction. There's no particular reason why there can't be more "you"s out there in the Hilbert space, equally convinced that their branch is the only one into which everything mysteriously collapsed.
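To spell out the formal picture dr_s is gesturing at (my illustration, not from the comment itself): in the standard von Neumann measurement scheme, unitary evolution entangles the observer with the system rather than selecting a single outcome,

$$\Big( \sum_i c_i \, |s_i\rangle \Big) \otimes |O_{\text{ready}}\rangle \;\longrightarrow\; \sum_i c_i \, |s_i\rangle \otimes |O_{\text{saw }i}\rangle ,$$

and each branch $|s_i\rangle \otimes |O_{\text{saw }i}\rangle$ contains an observer state that, from the inside, looks like a unique collapse to outcome $i$.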

Hmm... I see the connection to 1984. But isn't it useful to have words for spotting something that is obviously already happening? (Like drug clinical trial data being demonstrably presented, or even altered, with malicious or monetary intent in mind.)

Thank you for starting a discussion about this. I have two things to say:

1) In the post above, the adjective "inftoxic" means very much false, incorrect information. It also means the falseness was intentionally "put in" the data or story with an intent to mislead, cause harm, etc. So, in fact, the term is different from (and to me personally more useful than) the term "malinformation" (which I likewise find quite unhelpful).

2) Regardless of the usefulness of the terminology I used as an example, do you think that we could use new words in and around information that could improve the way we conduct the debate, in an attempt to be less wrong?

2Jiro
No, it doesn't. You've defined it to include harmful and deceptive information, not (or at least not just) false information. And censors love to claim that true things that their political opponents say are "harmful" and "deceptive" because someone might listen to them and draw a conclusion that favors their political opponents.
2Richard_Kennaway
The pronounceability would be improved with an extra vowel. "Infotoxic", etc.

Thank you for such a detailed write-up! I have to admit that I am teetering on the issue of whether or not to ban open-source LLMs, and as a co-founder of an AI-for-drug-design startup, I had taken the increased biosecurity risk as probably the single most important consideration. So I think the conversation sparked by your post is quite valuable.

That said, even if I consider all that you presented, I am still leaning towards banning powerful open-source LLMs, at least until we get much more information and most importantly before we establish other... (read more)