In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could also be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would be), but then you don't argue that one must therefore reject X; instead, you argue that one should support X but reject Y for unrelated reasons, and that one is not required to discard argument Q, which supports both X and Y, and thereby reject X (assuming X was in fact utility-increasing).
Or, alternatively, that the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for refusing to use that argument. In addition, the argument that moral value scales with brain complexity, which you now accept as an edit, can obviously be used to support sexism and racism, in exactly the same way you are using as a counterargument:
For any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in that measurement, because this isn't physics and exact equality has probability epsilon, however small the difference may be. So if you tie any continuous measurement, or any measurement that could ever fail to fully apply to a human, to your moral valuation of things, then by this standard you're racist and sexist.
No no no. I'm not saying "since sexism and racism are wrong..." I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if they are neither of those things) would also need to reject speciesism.
Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, hence the frame change.
Is this similar to arguing that if the bloody knife was the product of an illegal search, which we can't allow because allowing it would lead to other bad things, and is therefore inadmissible at trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcomed back into polite society?
One, do you believe that those five links also take a similarly mindkilling form, and that the mindkilling is justified because it is standard practice in ethics? If so, does the fact that it is standard practice actually justify it, and if so, what determines what is and isn't justified by an appeal to standard practice?
Refuting counter-argument X by saying that if X were your full set of ethical principles you would reach repugnant conclusion Y is, at its strongest, an argument that X is not a complete and fully satisfactory set of ethical principles. I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.
In addition, when we use an argument of the form "X leads to some conclusion Y, Y is a subset of Z, and all Z are bad," we imply that one can, even in theory, construct an internally consistent ethical system under which any principle set P that, in some circumstance, leads to an action in some such Z is wrong. I would claim that if you include all your examples of such Z, it is fairly easy to construct situations in which the sets Z between them contain all possible actions, ruling out every ethical system P, which would imply that no such ethical system can exist. If you well-define all your terms, I would be happy to attempt to construct such a scenario.
Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value from explicitly stated arguments, and follow those arguments ad absurdum, a bad thing results.
Since the bad thing is bad, and you say it is in some situations justified, clearly you are wrong", with the (reasonably explicit) accusation that if you use this line of reasoning you are (sexist! racist! in favor of killing babies! in favor of genocide! or, worse, not properly rational!)
You pig?
Speciesist language, not cool!
Haha! Anyway, I agree that it promotes a mindkilled attitude (I often read terrible arguments by animal-rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.
Haha only serious. My brain reacts with terror to that reply, and with good reason: it has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase, because one "can't" say them without at least ending any productive debate, is large.
I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").
It's not only the term. The post explicitly uses that exact argument: since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, anyone who disagrees with me is a sexist. QED, both because of course you aren't sexist/racist, and because even if you were, you certainly couldn't say such a thing on a public forum!
It's not obvious to me that anyone at the big companies actually wants the situation where everyone patents everything and there is a perpetual cold war. The big companies want a thriving economy, especially in their own sector, and if it were cheaper to form an alliance to block all the stupid patents while the real ones (at least those backed by big companies) got through, it's possible everyone would be better off.
Even with zero knowledge of the other guy's utility function, you'd always start with Lie #1: represent any outcome that leaves you worse off as having infinite negative utility (or at least more negative than your utopia point is positive).
This cuts off any outcome that decreases your utility, and is thus very, very good for you, even if you need to self-modify to make the lie real. Note that this is how actual negotiations work.
Another easy hack is to limit your goals: pretend that impossibly good outcomes are no better for you than the best achievable outcome, which increases the value of each unit of your utility by lowering your utopia point.
If both players lie in this way, the standard outcome is the default point.
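Both lies can be seen in a toy Nash-style bargaining model. This is my own construction, not anything from the thread: the 11-way pie split, the utility numbers, and the `-1e9` stand-in for "unboundedly bad" are all invented for illustration.

```python
# Toy Nash-style bargaining over an 11-way pie split (x = player 1's
# share, 0..10). The solution maximizes the product of each player's
# gain over the disagreement ("default") point.

def nash_solution(u1, u2, d1=0.0, d2=0.0, outcomes=range(11)):
    """Return the outcome maximizing the Nash product; outcomes a
    player reports as worse than the default point are excluded."""
    feasible = [x for x in outcomes if u1(x) >= d1 and u2(x) >= d2]
    if not feasible:
        return None  # no mutually acceptable outcome: default prevails
    return max(feasible, key=lambda x: (u1(x) - d1) * (u2(x) - d2))

honest1 = lambda x: x        # player 1 honestly wants a big share
honest2 = lambda x: 10 - x   # player 2 honestly wants the rest

# Honest reports give the symmetric split.
assert nash_solution(honest1, honest2) == 5

# Lie #1: player 1 reports "anything under 8 is unboundedly bad".
liar1 = lambda x: x if x >= 8 else -1e9
assert nash_solution(liar1, honest2) == 8  # the deal shifts in 1's favor

# If both players lie this way, no outcome is mutually acceptable, and
# the negotiation collapses to the default point.
liar2 = lambda x: (10 - x) if x <= 2 else -1e9
assert nash_solution(liar1, liar2) is None
```

The same mechanism shows why the lie works one-sided but not two-sided: each lie shrinks the feasible set in the liar's favor, and two such lies can leave the feasible set empty.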
Does that mean choosing whether to lie or not is like a prisoner's dilemma?
Yes, and this parallels real negotiation. If the two sides sufficiently trust each other, they show each other their term sheets (essentially their utility functions), find the Pareto-optimal set of solutions, and then pick a point on that frontier.
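The "compare term sheets, keep the Pareto-optimal deals" step can be sketched in a few lines. The deal names and the (my utility, their utility) scores below are invented for illustration:

```python
# Given candidate agreements scored by each side's term-sheet utilities,
# keep only the Pareto-optimal ones: deals that no other deal weakly
# dominates on both utilities.

deals = {"A": (3, 7), "B": (5, 5), "C": (4, 4), "D": (7, 2)}

def pareto_front(scored):
    """Return the subset of deals not dominated by any other deal."""
    front = {}
    for name, (u1, u2) in scored.items():
        dominated = any(v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)
                        for v1, v2 in scored.values())
        if not dominated:
            front[name] = (u1, u2)
    return front

# C (4, 4) is dominated by B (5, 5); the rest form the frontier the two
# sides then negotiate over.
assert sorted(pareto_front(deals)) == ["A", "B", "D"]
```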
For sufficiently reliable, repeated interactions that don't vary too much in size, with agents for whom trust is (in context) total, optimal real-world behavior is, from what I can tell, isomorphic to MWBS with a meta-negotiation over the ratio. As these conditions approach yours, your behavior should approach MWBS.
Firstly, this post is awesome.
Secondly, though, this post touches on the topic of intuition as a useful tool, something I think far too many Logic-Based types throw out without considering its practicality. It's better to think of it not as a substitute for logical thinking, but as a quick-and-dirty backup for when you don't have all the information.
Intuition can occur within two seconds, operates almost completely below conscious awareness, and begins affecting your body immediately. Here are some excerpts from Blink, a book by Malcolm Gladwell in which he investigates how intuition works, what abilities and drawbacks it has, and what biases can affect its overall usefulness.
Ah, a perfect opportunity to be a Logical Thinker, using careful observation and reasoning to find the ideal pattern. What path does intuition take, though?
This is all standard enough, but what is more impressive is the fact that people started generating stress responses to the red decks by the tenth card.
That's right, palms began to sweat in reaction to the red decks almost immediately, naturally pushing people towards the blue decks before they could even understand why, or even recognize what they were doing.
There are better examples of applied intuition in Blink, but I've purposely used only one of the earlier examples from the Amazon sample, out of respect for the book. I'd recommend reading the whole thing, especially if you're interested in understanding what intuition does while you're thinking things through.
How does all of this interact with the fact that almost everyone continues to take some number of cards from all the decks the entire time, rather than exploring early and then exploiting late?
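The never-fully-abandon-a-deck pattern the question describes is what a simple epsilon-greedy bandit produces. This is a hedged sketch, not a model of the actual Iowa gambling task: the deck payoffs, noise level, and parameters below are all invented.

```python
# Epsilon-greedy on a toy four-deck bandit: two "red" decks with negative
# expected payoff, two "blue" decks with positive. With probability
# epsilon the agent explores a random deck; otherwise it exploits the
# deck with the best observed average. Exploration never stops, so every
# deck keeps getting occasional draws even after learning.

import random

def run_bandit(n_draws=200, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    means = {"red1": -25.0, "red2": -25.0, "blue1": 25.0, "blue2": 25.0}
    totals = {d: 0.0 for d in means}
    counts = {d: 0 for d in means}
    for _ in range(n_draws):
        if rng.random() < epsilon or not any(counts.values()):
            deck = rng.choice(sorted(means))  # explore a random deck
        else:
            # exploit the deck with the best observed average payoff
            deck = max(means, key=lambda d: totals[d] / max(counts[d], 1))
        totals[deck] += means[deck] + rng.gauss(0.0, 10.0)  # noisy payoff
        counts[deck] += 1
    return counts

counts = run_bandit()
assert sum(counts.values()) == 200
# Exploitation concentrates on the good decks, but the bad decks are
# never abandoned outright.
assert counts["blue1"] + counts["blue2"] > counts["red1"] + counts["red2"]
```

Whether people are actually doing something epsilon-greedy-like, or whether the lingering draws reflect something else (ongoing uncertainty, habit, the sweat response lagging behind choice), is exactly the open question.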