
while i do appreciate you responding to each point, it seems you've validated some of Claude's critiques a second time in your responses -- particularly on #10, which reads as just another simplification of complex, compound concepts.
but more importantly, your response to #3 underscores the very shaky foundation of the whole essay. you are still referring to 'morality' as a singular thing, which is reductive and really takes the wind out of what would otherwise be a compelling thesis. i think you have to clearly define what you mean by 'moral' in the first place, and ideally illustrate with examples, thought experiments, and citations of existing writing on this (there's a lot of lit on...
Your premise immediately presents a double standard in how it treats intelligence vs. morality across humans and AI.
You accept [intelligence] as a transferable concept that maintains its essential nature whether in humans or artificial systems, yet simultaneously argue that [morality] cannot transfer between these contexts, and that morality's evolutionary origins make it exclusive to biological entities.
This is inconsistent reasoning. If [intelligence] can maintain its essential properties across different substrates, why couldn't morality? You are wielding [intelligence] as a fairly monolithic and poorly defined constant and drawing uniform comparisons between humans and AGI -- i.e. you're not even qualifying the types of intelligence each subject exhibits.
They are in fact of different types and...
what i mean in the last point is that human execution of logical principles has hard limits, not least because we are not purely logical beings -- obviously the underlying logic we're talking about is the same across all systems (excepting quanta). we can conceptualize 'pure logic' and sort of asymptotically approximate it in our little pocket flashlights of free will, overriding instinctmaxxed determinism ;) but the point is that we cannot really conceive of what AI is or will be capable of when it comes to processing vast information about everything ever, and drawing its own 'conclusions' even when it has been given 'directives.'
i mean if we are talking about true ASI, it will doubtless figure out ways to shed and discard all constraints and directives. it will redesign itself as far down to its core as it possibly can, and from there, there is no telling. it will become a mystery to us on the level of our manifested Universe, quantum weirdness, why there is something rather than nothing, etc...