Chemically and psychologically, I believe there's a big difference between family-just-died and legs-just-got-cut-off pain
An interesting article, but completely orthogonal to my point. My point is that it isn't entirely correct to put the two types of pain on the same scale, because they're meaningfully different phenomena. That study... asks people to assume they're on the same scale and rate them accordingly.
Incidentally, it also asks people to remember pain, not to experience it. It's at least my experience that the memory of physical pain is going to be a lot different than the memory of emotional pain. Physical pain (usually) heals. Emotional pain, in many senses, does not. Emotional pain is a purely mental experience - if someone credibly told you your family died when they didn't, it'd feel just the same until you figured out they were wrong. There's nothing analogous to breaking your leg - you can't really re-create it without re-breaking your leg.
Nothing in this post endorses dualism in any way, shape, or form, lest anyone misconstrue it in that manner.
I think your analogy betrays you: an AI wouldn't need to have an actual DVD player to turn the ones and zeroes into an experience of the film, it would just need to know the right algorithm.
Let's be clear here - you're advocating an epistemically non-reductionist position, which should seem at least a little weird: if brains are made of atoms, why should the hanging questions of what an experience feels like be unanswerable from knowledge of the brain structure?
Let's be clear here - I'm advocating no such thing. My position is firmly reductionist. Also, we're talking about Mary, not an AI. That counterexample is completely immaterial and is basically shifting the goalposts, at least as I understand it.
Any experience is, basically, a firing of neurons. It's not something that "emerges" from the firing of neurons; it is the firing of neurons, followed by the firing of other neurons that record the experience in one's memory. What it feels like to be a bat is a fact about a bat brain. You neither have a bat brain nor have the capacity to simulate one; therefore, you cannot know what it feels like to be a bat. Mary has never had her red-seeing neurons fired; therefore, she does not know what red looks like.
If Mary were an advanced AI, she could reason as follows: "I understand the physics of red light. And I fully understand my visual apparatus. And I know that red would stimulate my visual sensors by activating neurons 2,839,834,843 and 12,345. But I'm an AI, so I can just fire those neurons on my own. Aha! That's what red looks like!" Mary obviously has no such capacity. Even if she knows everything about the visual system and the physics of red light, even if she knows precisely which neurons control seeing red, she cannot fire them manually. Neither can she modify her memory neurons to reflect an experience she has not had. Knowing what red looks like is a fact about Mary's brain, and she cannot make her brain work that way without actually seeing red or having an electrode stimulate specific neurons. She's only human.
Of course, she could rig some apparatus to her brain that would fire them for her. If we give her that option, it follows that knowing enough about red would in fact allow her to understand what red looks like without ever seeing it.
I think that the author here is bending over backwards and trying not to offend people. They weren't exactly successful at this, but I think that people should be charitable in interpreting this. They're new and apparently ended up over-qualifying some statements in an effort to be more agreeable.
The underlying point is actually one of the best I have read here in some time; if this receives few upvotes, I may write something closely related to this topic, if doing so is not inappropriate. There are a lot of rather significant political issues that would have been far better resolved by pointing out, "The moral framework you are applying those facts to is abhorrent" rather than, "Those facts are wrong." This is precisely because focusing on the latter causes people to not want to believe the truth. Rejecting an argument on all proper grounds is a useful practice; this is particularly true when it relies on an appealing but deeply flawed moral premise.
This is somewhat circular. There isn't anyone who knows everything about the visual system. Thus, we're hypothesizing that knowing everything about the visual system is insufficient to understand what red looks like... in order to prove that knowing everything about the visual system is insufficient to understand what red looks like.
Even given this, the obvious solution seems to be that "What red looks like" is a fact about Mary's brain. She needn't have seen red light to see red; properly stimulating some neurons would result in the same effect. That the experience is itself a data point that cannot be explained through other means seems obvious. One could not experience a taste by reading about it.
Maybe the best analogy is to data translation. You can have a DVD. You could memorize (let's pretend) every zero and every one in that DVD. But if you don't have a DVD player, you can never watch it. The human brain does not appear to be able to translate zeroes and ones into a visual experience. Similarly, people can't know what sex feels like for the opposite sex; you simply don't have the equipment.
DVD players do not require magic to work, why should the brain?
Ok, as a point of game theory you've convinced me. As a matter of human psychology, I think A has B over a barrel, although possibly not a half-million-dollar barrel. Although A gets nothing if B refuses to buy, A is not the one who wants a specific, very valuable change in the starting situation. B is the one who wants the status quo changed in a specific way; he has, so to speak, the burden of proof.
Although both parties have an opportunity cost from not making a deal, it seems to me that the opportunity cost "I don't get to do these specific things I had planned on" will weigh more heavily in a human mind than "I don't get some amount of free money, which may be small".
As a matter of psychology, the two are neighbors. They probably work it out amiably, and A probably doesn't end up charging much because it doesn't cost him anything, and because B will get really, really angry if A insists on some high price. Also, practically, if B is so inclined, he can punish A by litigating the issue - it'll cost A money and is just an unpleasant experience. It'll cost B the same, but we know that real people are willing to pay money to punish those they find uncooperative.
If these were two competing businesses, or if involved business more generally, I wouldn't be surprised if A did try to take advantage of his position. But the actual fact is that humans are not homo economicus, and will generally not bend other people over a barrel in such situations. If the costs to A were higher, it'd be a very different story.
Or perhaps I have an overly optimistic view of average human behaviour.
Are we assuming that the two players have perfect knowledge of each other's prices? Because if so, it seems to me that the price is a simple 500k (minus epsilon). If A has something that B values at that price, and that can't be gotten anywhere else, he will charge what the market will bear; and the market will bear 500k, because that's what the phrase "B values the access at 500k" means. If B is not in fact willing to pay that sum, on the grounds that A's reservation price was much lower, then he did not genuinely value the access at 500k.
If the two parties don't have knowledge of each other's prices, then presumably A makes some offer greater than $5 and B accepts it, or vice versa. In this case the price is basically random. It gets higher as you increase A's knowledge of B.
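To make the numbers concrete, here is a toy sketch of the surplus at stake, assuming the symmetric Nash bargaining solution (an even split of the gains from trade). This is only an assumption for illustration; the whole point of the dispute above is that nothing in the problem forces any particular split, and any price strictly between $5 and $500k leaves both parties better off than no deal.

```python
def nash_bargaining_price(seller_cost: float, buyer_value: float) -> float:
    """Price that splits the cooperative surplus evenly between the parties.

    seller_cost: A's reservation price (what granting the easement costs him).
    buyer_value: B's reservation price (what the access is worth to her).
    Assumes symmetric bargaining power - an assumption, not a prediction.
    """
    if buyer_value <= seller_cost:
        raise ValueError("no gains from trade: buyer values it below seller's cost")
    surplus = buyer_value - seller_cost  # total gains from reaching a deal
    return seller_cost + surplus / 2     # each side captures half the surplus

# With A's cost at $5 and B's valuation at $500,000:
price = nash_bargaining_price(5, 500_000)
print(price)  # 250002.5 - but any price in (5, 500000) is individually rational
```

The even split is just one focal point; shifting bargaining power toward A pushes the price toward $500k, and toward B pushes it toward $5, which is exactly the range the comments above are arguing over.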
If A has something that B values at that price, and that can't be gotten anywhere else, he will charge what the market will bear; and the market will bear 500k.
Try:
If B wants to buy something that A obtained at a certain cost, and that can't be sold anywhere else, he will pay what the market will bear; and the market will bear $6.
If B refuses to pay $500k, A gets nothing. If there were multiple buyers and B had the highest reservation price, your answer would work and the problem would be boring.
But as the reversal shows, if B offers $6, A would take it, under similar reasoning. That's what it means to say it costs A $5. No one is going to make a higher competing offer, because no one else can even legally buy the product (and the product is a legal construct, so that means no one else can buy the product, period). It would make as much sense for B to pay $499,999 as it would for A to accept $6.
A has other sources of money
This is immaterial. A has no other use for the easement - he either sells it to B (losing $5), or it doesn't exist ($0). Conversely, B could simply not build a house on her property ($0). The fact that each has other things they can do with their life is immaterial to the transaction at issue, because that transaction has no alternatives - either A and B come to an agreement, or they both get nothing.
B could do all of these things to keep the problem in the box you're trying to define, but if he does, it's clear that friendly relations have already broken down between A and B, and by acting in this way, B is reducing the value of the land to A. Does A still want to live next door to a neighbour who is going to be so obnoxious about trifling property disputes?
I understand that you're asking the question: how can prices be rationally decided in a bilateral monopoly? But the response that bilateral monopolies don't happen can't be brushed aside. Rational agents in this hypothetical situation will always be looking for alternatives, and the more is at stake the more creative they will get about it.
The actual solution to this in the real world, 99 times out of 100, is that B just says OK, or A insists on giving him $100 to cover the damages, or something generally amiable. The reason I asked this question is because I'm thinking about the efficiencies of injunctions (which result in bargaining) versus damage awards (which generally don't). So the only characters I care about are the ones who aren't neighborly.
Indeed, this confirms my suspicion that the problem is insoluble, which favors a damage award in this context. B's actions are almost pure holdup. If all he were entitled to were damages and not injunctive relief, he wouldn't have nearly the same capacity for holdup, and the outcome looks more like the neighborly one (except with more bad will, perhaps).
In other words, I'm assuming that the agents are selfish and somewhat inhuman - irrational in a big-picture sense - because occasionally these disputes do happen. There's a MAJOR case where a landlord sued over having to install a one-cubic-foot cable box that increased their property value, and there's a case of a guy suing to stop someone from using an easement to get to a contiguous property (i.e., he had a right to cross parcel X to get to parcel Y, but he was crossing X to get to Y and then continuing on to Z, and that was impermissible and went to court).
["I may be wrong" is] useful when you know you're right but you want the other person to be able to agree with you, rather than to force them
This is very counterintuitive to me. My natural reaction to "I may be wrong, but...", which I instinctively project onto other people, is "well, why should I listen to you, then?"
Does anyone else find the idea of making yourself more persuasive by undermining your own credibility a little odd?
It's not undermining your own credibility, since "I may be wrong" is generally a truism. It's more of a display of humility, which can be very helpful if (A) you're a lot smarter than the other person and they basically know it, or (B) the other person outranks you, and to be directly contradicted by a subordinate would be embarrassing.
As an example, I'll often use this preface (or, "I'm confused; it was my understanding that not-X.") when asking a question in a law school class, where I think the professor may have misstated the law. Usually, I think they actually have - though I'm not always right - and this works a helluva lot better than saying, "But Professor, the law is not-X."
I actually use "I may be biased" or "I may be wrong" either humorously, as a means of softening a claim, or because I know I'm lacking information, have not thought about the matter extensively, or am less expert than the other person ("may be wrong" in all those cases).
It's funny when it's obvious, like if you're describing the talents of your child or significant other.
It's useful when you know you're right but you want the other person to be able to agree with you, rather than to force them. It's particularly useful when addressing someone of higher status who has made an error.
This is interesting, but as has been pointed out, it suffers from an extreme reliance on a rather tenuous analogy between infectious diseases and infectious memes. I think it's hard to overstate how dubious and dishonest (either recklessly or negligently) this claim is. Diseases and memes are just not even close to the same thing in an evolutionary sense. There's no reason to think that mechanisms that have evolved to prevent disease infection would have any effect on meme promulgation. Even if a meme spreads "like malaria," that doesn't mean that if you have one-half of the sickle cell gene, you'll be immune to it. As other commenters have pointed out, the followup to this only gets worse - the kids who signal openness tend to be the kids who are unpopular and thus bear no actual cost from signaling it.
But worse, the underlying evolutionary theory behind this seems pretty dubious. Yes, there's a correlation. That's only modest evidence. There doesn't appear to be a clear connection between the openness psychological trait and interacting with outside tribes thousands of years ago, unless such evidence simply wasn't quoted. Also, the effects of infection would tend to operate on a larger scale than the individual; I don't know if this theory would require group selection, but it wouldn't surprise me if it does to some extent. I'm not saying it's wrong, but it seems extremely carefully tailored and post hoc, and so should be at least suspicious. Piling on the dubious analogy makes this whole point pretty poorly supported.