Larifaringer

Comments

Even if current LLM architectures cannot be scaled up or incrementally improved to achieve human-level intelligence, it is still possible that one or more additional breakthroughs will happen in the next few years that allow for an explicit thought assessor module, just as the transformer architecture surprised everybody with its efficacy. With so much money and so many human resources being thrown at the pursuit of AGI nowadays, we cannot be confident that it will take 10 years or longer.

It seems like there's a conceptual leap from "pain is intrinsically bad for wild animals" to "wild animal suffering is a problem that humans should address." I don't see a clear argument for why we, as humans, are morally implicated. "Pain is bad for wild animals" doesn't imply "the pain of wild animals is bad for humans."