I'm watching this dialogue now; I'm 45 (of 73) minutes in. I'd just like to remark that:
- Eliezer is so nice! Just so patient, and calm, and unmindful of others' (ahem) attempts to rile him.
- Robert Wright seemed more interested in sparking a fiery argument than in productive discussion. And I'm being polite here. Really, he was rather shrill.
Aside: what is the LW policy on commenting on old threads? All good? Frowned upon?
Aside from the problem that higher intelligence doesn't necessarily lead to convergent moral goals, in this context I'd hope that a superintelligence didn't see it that way. Since the main argument for a difference in moral standing between humans and most animals rests on the difference in cognitive capacity, a superintelligence that took that argument seriously could, by the same token, put its own preferences above humans' and claim the moral high ground in the process.
I think it would be difficult to construct an ethical system that gives no consideration to cognitive capacity. Is there a practical reason for said superintelligence not to take humans' cognitive capacity into account? Is there a logical one?
Not to make light of a serious question, but, "Equal rights for bacteria!"? I think not.
Aside: I am puzzled as to the most likely reason Esar's comment was downvoted. Was it perhaps considered insufficiently sophisticated for LW, or taken to imply that its poster was insufficiently well-read?