Comment author: Ambition 13 August 2013 09:25:03AM *  8 points [-]

Firstly, this post is awesome.

Secondly though, this post touches on intuition as a useful tool, something I think far too many Logic-Based types throw out without considering its practicality. It's better to think of it not as a substitute for logical thinking, but as a quick and dirty backup for when you don't have all the information.

Intuition can operate within two seconds, runs almost completely below conscious awareness, and begins affecting your body immediately. Here are some excerpts from Blink, a book by Malcolm Gladwell in which he examines how intuition works, what abilities and drawbacks it has, and what biases can affect its overall usefulness.

In front of you are four decks of cards, two of them red and the other two blue. Each card in those four decks either wins you a sum of money or costs you some money, and your job is to turn over cards from any of the decks, one at a time, in such a way that maximizes your winnings.

Ah, a perfect opportunity to be a Logical Thinker, using careful observation and reasoning to find the ideal pattern. What path does intuition take though?

What you don't know at the beginning however, is that the red decks are a minefield. The rewards are high, but when you lose on the red cards, you lose a lot. Actually, you can win only by taking cards from the blue decks, which offer a nice steady diet of $50 payouts and modest penalties. The question is how long will it take you to figure this out? After about fifty cards or so, people start to develop a hunch about what's going on. We don't know why we prefer the blue decks, but we're pretty sure at that point that they are a better bet. After turning about eighty cards, most of us have figured out the game and can explain exactly why the first two decks are a bad idea.

This is all standard enough, but what is more impressive is the fact that people started generating stress responses to the red decks by the tenth card.

That's right: palms began to sweat in reaction to the red decks almost immediately, naturally pushing people toward the blue decks before they could understand why, or even recognize what they were doing.

In those moments, our brain uses two very different strategies to make sense of the situation. The first is the one we're most familiar with. It's the conscious strategy. We think about what we've learned, and eventually we come up with an answer. This strategy is logical and definitive. But it takes us eighty cards to get there. It's slow, and it needs a lot of information. There's a second strategy, though. It operates a lot more quickly. It starts to kick in after ten cards, and it's really smart, because it picks up the problem with the red decks almost immediately.
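For concreteness, here is a minimal simulation of the deck setup described above. The excerpt specifies only the steady $50 blue payouts; the red reward, both penalty sizes, and all probabilities below are my own guesses, chosen so the red decks pay more per winning card but lose money on average:

```python
import random

# Hypothetical payoff tables. Only the $50 blue payout comes from the
# excerpt; everything else is invented so that red looks tempting per
# card but has negative expected value overall.
def draw_red():
    return 100 if random.random() < 0.5 else -250   # EV = -75

def draw_blue():
    return 50 if random.random() < 0.9 else -50     # EV = +40

random.seed(0)
n = 100_000
print("red EV estimate: ", sum(draw_red() for _ in range(n)) / n)
print("blue EV estimate:", sum(draw_blue() for _ in range(n)) / n)
```

Run enough draws and the gap is unmistakable; the striking result in the study is that subjects' bodies pick up on it within about ten cards, long before the conscious estimate converges at eighty.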

There are better examples of applied intuition in Blink, but I've deliberately used only one of the early examples from the Amazon sample, out of respect for the book. I'd recommend reading the whole thing, especially if you're interested in understanding what intuition does while you're thinking things through.

Comment author: Zvi 14 August 2013 10:32:51PM 2 points [-]

How does all of this interact with the fact that almost everyone will continue to take some number of cards from all decks the entire time, rather than going for exploration early and then exploitation late?
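To make the contrast concrete, here is a rough sketch (reusing the hypothetical payoffs from the sketch above; the "mix" rate for continued sampling is likewise invented) comparing a strict explore-then-exploit strategy with the mixed sampling people apparently keep doing:

```python
import random

def draw(deck):
    # Same hypothetical payoffs as above: red EV -75, blue EV +40.
    if deck == "red":
        return 100 if random.random() < 0.5 else -250
    return 50 if random.random() < 0.9 else -50

def explore_then_exploit(total=100, explore=20):
    # Alternate decks for a fixed trial period, then commit to the
    # deck with the better observed average payoff.
    results = {"red": [], "blue": []}
    winnings = 0
    for i in range(explore):
        deck = "red" if i % 2 == 0 else "blue"
        payoff = draw(deck)
        results[deck].append(payoff)
        winnings += payoff
    best = max(results, key=lambda d: sum(results[d]) / len(results[d]))
    return winnings + sum(draw(best) for _ in range(total - explore))

def keep_sampling(total=100, mix=0.2):
    # Closer to observed behavior: mostly blue, but never stop trying red.
    return sum(draw("red" if random.random() < mix else "blue")
               for _ in range(total))

random.seed(1)
print("explore then exploit:", explore_then_exploit())
print("keep sampling:       ", keep_sampling())
```

On fixed payoffs like these the continued red draws are a pure loss, which is what makes the observed behavior puzzling; in environments where payoffs can drift over time, though, some continued sampling is the standard bandit answer.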

Comment author: Lukas_Gloor 29 July 2013 02:56:47PM *  2 points [-]

No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).

Comment author: Zvi 29 July 2013 03:22:37PM 1 point [-]

In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would be), but you don't then argue that one must reject X; instead, you should support X but reject Y for unrelated reasons, and you are not required to disregard an argument Q that supports both X and Y and thereby reject X (assuming X was in fact utility-increasing).

Or, the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for refusing to use it. In addition, the argument that moral value scales with brain complexity, which you now accept as an edit, can obviously be used to support sexism and racism in exactly the way you use as a counterargument:

For any given characteristic, different people will have different amounts of it, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in its measurement, because this isn't physics and exact equality has probability epsilon, however small the difference. So if you tie any continuous measurement to your moral valuation, or any measurement that could ever fail to fully apply to some human, you're racist and sexist.
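The "probability epsilon" point is easy to check directly. A minimal sketch (the distribution and sample sizes are arbitrary): even when two groups are drawn from the identical continuous distribution, their sample means essentially never come out exactly equal, so some statistical difference always exists to seize on.

```python
import random

random.seed(2)
ties = 0
trials = 10_000
for _ in range(trials):
    # Two groups drawn from the *same* continuous distribution.
    group1 = [random.gauss(0, 1) for _ in range(30)]
    group2 = [random.gauss(0, 1) for _ in range(30)]
    if sum(group1) / 30 == sum(group2) / 30:
        ties += 1
print(f"exact ties in {trials} trials: {ties}")  # 0 in practice
```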

Comment author: Lukas_Gloor 29 July 2013 02:03:59PM 5 points [-]

No no no. I'm not saying "since sexism and racism are wrong..." - I'm saying that those who don't want their arguments to be the sort that could analogously justify racism or sexism (even if they are personally neither racist nor sexist) would also need to reject speciesism.

Comment author: Zvi 29 July 2013 02:32:58PM 1 point [-]

Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, hence the frame change.

Is this similar to arguing that if the bloody knife was the product of an illegal search, which we can't allow because allowing it would lead to other bad things, and is therefore inadmissible at trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcomed back into polite society?

Comment author: Lukas_Gloor 29 July 2013 01:38:11PM *  3 points [-]

That's common practice in ethics.

You need something to work with; otherwise ethical reasoning couldn't get off the ground. But that doesn't necessarily imply that people are not being properly rational (irrationality would have to be defined relative to a goal, and ethics is about goals).

Comment author: Zvi 29 July 2013 02:17:16PM 1 point [-]

One: do you believe that those five links also take a similarly mindkilling form, and that mindkilling is justified because it is standard practice in ethics? If so, does the fact that it is standard practice justify it, and if so, what determines what is and isn't justified by an appeal to standard practice?

Refuting counter-argument X by saying that if X was your full set of ethical principles you would reach repugnant conclusion Y is at its strongest an argument that X is not a complete and fully satisfactory set of ethical principles. I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

In addition, when we use an argument of the form "X leads to some conclusion Y, where Y can be considered a subset of Z, and all Z are bad," we imply that any principle set P that under some circumstance leads to an action in such a Z is wrong, and that one could, even in theory, construct an internally consistent ethical system avoiding every Z. I would claim that if you include all your examples of such Z, it is fairly easy to construct situations in which the sets Z between them contain all possible actions, ruling out every ethical system P, which would imply that no such ethical system can exist. If you well-define all your terms, I would be happy to attempt to construct such a scenario.

Comment author: Zvi 29 July 2013 01:20:15PM 0 points [-]

Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

Since the bad thing is bad, and you say it is in some situations justified, clearly you are wrong, with the (reasonably explicit) accusation that if you use this line of reasoning you are (sexist! racist! in favor of killing babies! in favor of genocide! or worse, not being properly rational!)

Comment author: Lukas_Gloor 29 July 2013 12:07:33AM 12 points [-]

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes a mindkilled attitude (I often read terrible arguments from animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.

Comment author: Zvi 29 July 2013 01:07:49PM 11 points [-]

Haha only serious. My brain reacts to that reply with terror, and with good reason: it has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without at least ending any productive debate is large.

Comment author: Qiaochu_Yuan 29 July 2013 12:01:06AM *  13 points [-]

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

Comment author: Zvi 29 July 2013 12:59:42PM 6 points [-]

It's not only the term. The post explicitly uses that exact argument: since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, QED - both because of course you aren't sexist/racist and because, even if you were, you certainly couldn't say such a thing on a public forum!

Comment author: Zvi 23 July 2013 04:05:58PM 1 point [-]

It's not obvious to me that anyone at the big companies actually wants the situation where everyone patents everything and there is a perpetual cold war. The big companies want a thriving economy, especially in their sector, and if it were cheaper to band together in an alliance that blocked all the stupid patents while the real ones (at least those backed by big companies) got through, it's possible everyone would be better off.

Comment author: Zvi 21 July 2013 11:44:37AM 2 points [-]

Even with zero knowledge of the other guy's utility function, you'd always start with Lie #1: represent any outcome that leaves you worse off as having infinite negative utility (or at least more negative than your utopia point is positive).

This cuts off any outcome that decreases your utility, and is thus very, very good for you - even if you need to self-modify to make it real. Note that this is how actual negotiations work.

Another easy hack is to limit your goals and pretend that impossibly good outcomes are no better for you than the best achievable outcome, increasing the weight of each unit of your utility by lowering your utopia point.
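A minimal sketch of Lie #1 (this is not the post's MWBS, which isn't reproduced here; the standard Nash bargaining product over a zero disagreement utility is used as a stand-in, and all payoffs are invented): a player who reports catastrophic disutility for anything below a chosen threshold drags the solution to that threshold.

```python
# Candidate outcomes: splits of a surplus of 10 between players A and B.
splits = [(x / 10, 10 - x / 10) for x in range(101)]

def bargain(u_a, u_b):
    """Stand-in solution: maximize the product of reported utilities,
    treating the disagreement utility as 0 and skipping any outcome a
    player reports as worse than disagreement."""
    best, best_val = None, float("-inf")
    for a, b in splits:
        ga, gb = u_a(a), u_b(b)
        if ga < 0 or gb < 0:
            continue  # outcomes "worse than default" are cut off entirely
        if ga * gb > best_val:
            best, best_val = (a, b), ga * gb
    return best

honest = lambda x: x                      # true utility: linear in money
liar = lambda x: x if x >= 6 else -1e9    # Lie #1: below 6 is "infinitely" bad

print("both honest:", bargain(honest, honest))  # -> (5.0, 5.0)
print("A lies:     ", bargain(liar, honest))    # -> (6.0, 4.0)
```

With both players lying this way, any pair of thresholds summing to more than the surplus leaves no admissible outcome at all, which is the "standard outcome is the default point" failure quoted in the next comment.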

Comment author: Oscar_Cunningham 20 July 2013 08:13:53AM 7 points [-]

If both players lie in this way, the standard outcome is the default point

Does that mean choosing whether to lie or not is like a prisoner's dilemma?

Comment author: Zvi 21 July 2013 11:31:08AM 3 points [-]

Yes, and this parallels real negotiation. If the two sides sufficiently trust each other, you show each other your term sheets (basically utility functions), find the Pareto-optimal set of solutions, and then pick a point on that frontier.

For sufficiently reliably repeated interactions that don't vary too much in size, with agents whose trust in context is total, optimum real-world behavior is, from what I can tell, isomorphic to MWBS with a meta-negotiation over the ratio. As your situation gets closer to these conditions, your behavior should approach MWBS.
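A minimal sketch of the term-sheet step (the deal names, utilities, and the reduction of a term sheet to a single utility score are all invented for illustration): once both sides' utilities over candidate deals are on the table, the Pareto-optimal set is just the deals not dominated by any other.

```python
# (utility to us, utility to them) for each candidate deal; values invented.
deals = {
    "deal_a": (3.0, 9.0),
    "deal_b": (5.0, 7.0),
    "deal_c": (4.0, 6.0),  # dominated by deal_b: worse for both sides
    "deal_d": (8.0, 2.0),
}

def pareto_frontier(options):
    """Keep deals for which no other deal is at least as good for both
    sides and strictly better for at least one."""
    frontier = {}
    for name, (u1, u2) in options.items():
        dominated = any(
            v1 >= u1 and v2 >= u2 and (v1 > u1 or v2 > u2)
            for other, (v1, v2) in options.items() if other != name
        )
        if not dominated:
            frontier[name] = (u1, u2)
    return frontier

print(pareto_frontier(deals))  # deal_c drops out; the rest are the frontier
```

Picking the point on that frontier is then the actual negotiation, which is where a rule like MWBS (or a meta-negotiation over its ratio) comes in.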
