Ethical systems often rely on axioms that prescribe behavior, such as "you must maximize happiness" or "you must prioritize duty"; the "must" signals an unjustified and fundamentally illogical leap. I derive moral behavior without using prescriptive claims, instead uncovering the logical relationships between observable phenomena. I begin with two observable premises, then derive their logical consequences.
These premises form a deterministic ‘law’ (like gravity), where moral behavior emerges necessarily from logic—not because it ‘should,’ but because violating it creates contradictions. Like gravity, this principle operates universally in theory; in practice, friction (e.g., irrational agents) obscures but doesn’t negate it.
The proof relies on two foundational axioms. One, we live in a logical reality. Essentially, the universe operates deterministically; all phenomena (including human behavior) are causally determined. In layman's terms, "there are no uncaused causes." Two, suffering exists. It is an objective state (we call it "badness") that agents inherently avoid (e.g., pain, distress). It is not a construct, merely an observable phenomenon present in all sentient life.
Centrally important deductions are summarized as follows. One, rational agents (those capable of goal-directed reasoning) use autonomy (information and choice) to avoid suffering. An example of this is the choice to remove your hand from a fire because it hurts. Two, any act that violates the autonomy of an agent cannot be universalized without contradiction. If Agent A harms Agent B, then under universalization Agent A must accept being harmed; this undermines Agent A's own suffering-avoidance by setting a precedent for autonomy violations. In layman's terms: "an eye for an eye will make the whole world blind."
This reveals the only stable strategy (one that does not undermine one's own autonomy): mutual respect for autonomy. It is the only strategy profile that forms a stable (Nash) equilibrium. Essentially, as an agent, doing harm invites retaliation, while cooperation preserves your own autonomy and yields a sustainable harmony, unless cooperation is factually impossible (e.g., not enough resources for both agents to survive). In layman's terms, this is just a complicated way of saying that the "golden rule" is the only model under which mutual flourishing can occur.
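To make the equilibrium claim concrete, here is a minimal sketch, assuming purely illustrative payoffs (they are not part of the argument above), that checks for pure-strategy Nash equilibria in a symmetric two-agent game where each agent either respects or violates the other's autonomy. The numbers encode the premise that violation invites retaliation costlier than any short-term gain; under that assumption, mutual respect comes out as the unique pure-strategy equilibrium.

```python
# Hypothetical symmetric two-agent game.
# Strategies: "respect" (respect the other's autonomy) or "violate".
# The payoff numbers below are illustrative assumptions: they encode the
# premise that violating autonomy invites retaliation that outweighs any
# short-term gain, so respecting is better regardless of the other's move.
STRATEGIES = ("respect", "violate")

# payoffs[(a, b)] = (utility to A, utility to B) when A plays a and B plays b.
payoffs = {
    ("respect", "respect"): (3, 3),    # mutual respect: sustainable harmony
    ("respect", "violate"): (0, 2),    # B gains briefly, A suffers
    ("violate", "respect"): (2, 0),    # A gains briefly, B suffers
    ("violate", "violate"): (-1, -1),  # mutual violation: retaliation spiral
}

def is_nash(a, b):
    """A pure-strategy profile is a Nash equilibrium if neither agent can
    improve its own payoff by unilaterally switching strategies."""
    u_a, u_b = payoffs[(a, b)]
    a_stays = all(u_a >= payoffs[(alt, b)][0] for alt in STRATEGIES)
    b_stays = all(u_b >= payoffs[(a, alt)][1] for alt in STRATEGIES)
    return a_stays and b_stays

equilibria = [profile for profile in payoffs if is_nash(*profile)]
print(equilibria)  # [('respect', 'respect')] under these assumed payoffs
```

Note that the result depends entirely on the assumed payoffs: if violation paid off even against a retaliating partner (as in a standard prisoner's dilemma), mutual respect would not be the unique equilibrium. The sketch therefore demonstrates only that the conclusion follows from the retaliation premise, not that the premise itself holds.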
What precedes this sentence is my argument; what follows preempts some possible objections or "edge cases."
One, "What about irrational agents, like psychopaths?" They are irrational agents, incapable of universalizing or of correcting their behavior via reasoning. The framework applies only to rational agents (those corrigible by reasoning), just as a physical law applies only to systems within its domain; judging an irrational agent by it would be like calling an earthquake immoral, a category error. Two, "Is suffering truly binary? Is it really unquantifiable? What about trade-offs or justified harm?" Suffering's moral weight is binary because weighing one agent's suffering against another's requires subjective arbitration, a violation of universal logic; without an agreed-upon metric, such comparison is fundamentally illogical. Three, "How is this any different from Kant?" Kant's "duty" is a prescriptive leap that ignores the is-ought gap; this framework is descriptive, revealing morality as the Nash equilibrium of rational agents.
This concludes the bulk of my argument. I will now address some obvious implications.

For AI alignment with humanity: we would program agents to respect autonomy and to avoid unintended harm, so the agents we create could cause harm only where that harm was unpredictable given the information available to them.

For laws and political systems: laws that violate autonomy (e.g., imprisonment of a rational agent) are logically incoherent and destroy the very ideal they seek to enforce (harmony). Forced harmony is not harmony at all.

For everyday ethics, rhetoric, logic, and art: it defines "love" as a mutual recognition of the value of autonomy. Tragedy is then simply the failure to recognize the logical importance of autonomy, while everything aligned with "love" results from actions that demonstrate a full understanding of the value of respecting autonomy.

For procreation: it imposes an unavoidable autonomy violation. This is not immoral; it is a predictable misalignment between the creator's act and the created's inherent suffering-avoidance drive.

For suicide: in a logic-aligned society (where autonomy is universally respected), suicide is logically incoherent, since harmony precludes the need for terminal suffering-avoidance. In a logic-deficient society (like ours), however, suicide can align with autonomy if no other suffering-avoidance option exists. Crucially, this framework imposes no obligation to live; it only describes the conditions under which suicide contradicts or aligns with universalized autonomy.
Disclaimer: This framework focuses on rational agents; extending it to non-rational agents (e.g., infants) or absolute scarcity requires further work. The core deduction, however, stands.