E.g., the snake wouldn't have human-like reward circuitry, so it would probably learn to value very different things than a human who went through the same experiences.
So in this case I think we agree. But it seems a bit at odds with the 4% weighting of genetic roots. If we agree the snake would exhibit very different values despite experiencing the 'human learning' part, then shouldn't this adjust the 60% weight you grant that part? It seems the evolutionary roots made all the difference for the snake. Which is the whole point about initial AGI alignment...
I would argue that you cannot weight these things along a single metric. Say evolution -> human values really is only 4% of your value alignment; if that 4% is the fundamental core, then it's not one term in the sum of all values but a coefficient, or a base where the other stuff is the exponent. It's the hardware the software has to be loaded onto, though not totally tabula rasa either.
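To make the additive-versus-multiplicative distinction concrete (a toy sketch of my own framing, with made-up symbols, not anything claimed in the thread):

$$V_{\text{total}} \approx 0.04\,V_{\text{evo}} + 0.60\,V_{\text{learn}} + \dots \quad \text{vs.} \quad V_{\text{total}} \approx V_{\text{evo}} \cdot \left(V_{\text{learn}} + \dots\right)$$

In the additive picture, losing the evolutionary term only costs you 4%; in the multiplicative picture, a mismatched $V_{\text{evo}}$ (the snake's reward circuitry) drags the whole product toward zero no matter how good the learned term is.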
Correct me if I'm wrong, but this would assume that if you could somehow make a snake with human-level intelligence and raise it in human society (let's pretend nobody considers it ...
I think "The Bottom Line" here is meant to link to the essay.