I understand the type of criticism generally, but could you say more about this specific case?
I'm curious if the objection stems from some mismatch of abstraction layers, or just the habit of not speaking about certain topics in certain terms.
This all seems to be about the "qualia" problem. Take another example. How would you know if an alien was having the experience of seeing the color red? Well, you could show it red and see what changes. You could infer it from its behavior (for example if you trained it that red means food - if indeed the alien eats food).
Similarly you could tell that it's suffering when it does something to avoid an ongoing situation, and if later on it would very much prefer not to go under the same conditions ever again.
I don't think there is anything special about the actual mechanism and neural pattern that expresses pain or suffering in our brains. It's that pattern's relation to memories, sensory inputs and motor outputs that's important.
Probably you could even retrain the brain to treat a certain fixed brain stimulus as pleasure even though it was previously associated with pain. It's like putting on those corrective glasses that rotate the visual input by 180°: the brain adapts to the situation, and after some time the person feels normal again.
There's also a linguistic issue here. The English "and" doesn't simply mean the conjunction of probability theory in everyday speech. Indeed, without using words like "given" or "suppose", or a long phrase such as "if we already know that", we can't easily distinguish P(Y | X) from P(Y, X) linguistically.
"How likely is it that X happens and then Y happens?", "How likely is it that Y happens after X happened?", "How likely is it that event Y would follow event X?". All these are ambiguous in everyday speech. We aren't sure whether X has hypothetically already been observed or it's a free variable, too.
In both cases, the tradeoff is the same - drive fifteen minutes to save twenty bucks - but people were much more willing to do it for the cheap item, because $20 was a higher percentage of its total cost. With the $2000 TV, the $20 vanishes into the total cost like a drop in the ocean and seems insignificant.
Evaluating cost savings as a percentage actually makes a certain amount of sense when evaluating policies rather than acts. Cheaper purchases tend to be much more frequent: you probably buy many more shirts than big-screen TVs, so expending the effort to find the cheapest source of shirts, and to evaluate whether it's worthwhile to go out of your way to buy them, will save you several times $20 over the lifetime of the policy, whereas the TV is effectively a one-time decision which will only save you $20 total. True, the 15-minute drive is a per-purchase cost rather than a per-policy cost, but 1) the cost is not just the drive time, but also the effort of researching options and the cognitive load of picking an option, which are one-time costs, and 2) a general policy of thriftiness for small, frequent purchases can have a substantial effect on your overall financial situation, while indulging in overpayment for convenience on the odd big one-time purchase is an affordable luxury.
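As a rough sketch of that amortization argument (all numbers here are my own hypothetical choices): the one-time research cost spreads over every future purchase under the policy, so the frequent cheap purchase yields far more savings per minute of total effort.

```python
research_minutes = 60  # one-time: find the cheap store, compare options
drive_minutes = 15     # per-purchase cost of going out of your way
savings = 20           # dollars saved per purchase at the cheaper store

def dollars_per_minute(n_purchases):
    """Savings per minute of total effort over the policy's lifetime."""
    total_minutes = research_minutes + n_purchases * drive_minutes
    return n_purchases * savings / total_minutes

print(dollars_per_minute(1))   # TV-style one-off:   20 / 75  ≈ 0.27 $/min
print(dollars_per_minute(20))  # shirt-style policy: 400 / 360 ≈ 1.11 $/min
```

Under these invented numbers the thrifty policy pays roughly four times better per minute, even though each individual trip trades the same fifteen minutes for the same $20.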
On a different note, another factor to take into account when evaluating commuting times is the possibility of changing jobs. When I bought my house, I specifically looked for a short commute time, but not just to my then-current workplace. I also took into account commute times to other places I might end up working if I changed jobs (other campuses of companies in the area which employ large numbers of people in my field, especially places which employ friends of mine who could refer me for positions). By over-optimizing for my then-current job, I felt I would have increased my risk exposure if I lost my job or became unhappy with it, and reduced my ability to take advantage of new opportunities if another employer could make more productive use of me and cut me in on the additional value created.
One mistake I did make in buying a house was badly underestimating the cost in time, effort, and cash of making repairs and improvements to a house purchased in poor condition. In hindsight, I think I made the right tradeoffs: after spending the money, I wound up with a house that will suit my needs better, and for longer, than anything I could have afforded if I had paid the premium for a house already in good condition (this includes the substantial benefit of being able to customize aspects of the house to my desires as I made repairs and improvements). But that was a happy accident, given the major misevaluations I made when planning the purchase.
Or maybe it's just outrageous to ask for $40 when it's clearly possible to sell it for $20. So you kind of punish the shop that asks for $40 because you see them as dishonest and morally repulsive. Sometimes you also have to pay attention to what behavior you encourage with your actions. Not only the immediate dollar value.
Why don't Christmas tree sellers sell the last, leftover Christmas trees much cheaper, right before Christmas? Because then lots of people would just wait until that time and buy their tree cheap. If buyers know that the seller would rather throw the leftover goods in the trash than sell them cheaper, then they will just casually buy the tree early, knowing that the price is stable and it's all fair. Short-sighted optimization would tell the seller to sell the leftovers cheaper rather than throw them away.
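Here's a toy revenue model of that commitment logic (every number is invented for illustration, and the fraction of buyers who would wait for a clearance is the assumption doing all the work):

```python
buyers, price, clearance_price = 100, 40, 10

# Seller committed to a stable price: everyone buys at full price.
revenue_committed = buyers * price  # 100 * 40 = 4000

# Seller known to discount leftovers: suppose half the buyers strategically
# wait for the clearance instead of paying full price.
waiters = buyers // 2
revenue_discounting = (buyers - waiters) * price + waiters * clearance_price

print(revenue_committed, revenue_discounting)  # 4000 2500
```

The "wasteful" policy of trashing leftovers can beat the locally sensible policy of discounting them, precisely because buyers anticipate the seller's behavior.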
Similarly, you may want to "send a message" to the $40 shop that you would rather drive a long way than participate in such an outrageous deal.
I'm ashamed to admit this, but I haven't worked with programming on a deep enough level to be comfortable with your analogy. The one thing I think it's missing (and I haven't done a very good job explaining this) is the process of learning/introspection and the distinction between "adding to a body of propositional knowledge" and "triggering the 'learning' subroutine in the mind", which causes the central confusion.
Propositional knowledge and introspection may be analogous to running a virtual machine in user-space, in which you can instantiate the redness object. But that's not a redness object in the real (non-virtual) program. The "real" running program only has user-space objects that are required for the execution of the virtual machine (virtual registers, command objects, etc).
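A minimal sketch of what I mean (my own construction, purely illustrative): the "virtual machine" is a tiny dict-based interpreter. A "redness" object exists only inside the simulated world; the host program only ever holds interpreter bookkeeping such as strings, dicts, and an instruction list.

```python
# A two-instruction "program" for the toy VM to execute.
program = [
    ("create", "redness"),   # the VM instantiates its "redness" object
    ("inspect", "redness"),  # the VM reports on that object
]

def run(program):
    vm_heap = {}  # objects that exist only in the VM's simulated world
    log = []
    for op, name in program:
        if op == "create":
            vm_heap[name] = {"kind": name}
        elif op == "inspect":
            log.append(f"VM sees a {vm_heap[name]['kind']} object")
    return log

print(run(program))  # ['VM sees a redness object']
```

From the host's point of view there is no redness object anywhere, just dicts and strings that happen to encode one; the "redness" exists only at the level of the simulation.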
Desiring a mysterious explanation is like wanting to be in a room with no people inside: once you explain it, it's not mysterious any more. The property depends on your actions: emptiness is destroyed by your entering the room, and mystery is destroyed by your explaining the thing. This is just an alternative to the map-territory way of putting it.
I guess you can't want to want stuff. When you genuinely want something (not prestige, but an actual goal), you'll easily slip into the "flow experience", lose track of time, and actually progress toward the goal without having to force yourself. If anything, you have to force yourself to stop in order to sleep and eat, because you'd do this thing all day if you could! Find the thing where you slip into flow easily, and then do the most productive thing that closely resembles that activity.
I don't see a flat pixel grid when I walk around, either; I see a 3D scene (generally only where I'm currently looking; I mean, I can recall where things are when I'm not looking at them, but they're not in my current visual model, that memory has to be stored elsewhere).
And yet, a lot of optical illusions work for me, because (as with the illusion in this article) the drawing is close enough to what reality looks like to fool the "scene reconstruction" module in my brain, and I reconstruct the relevant 3D scene when I look at it. Some optical illusions (such as this one) work by fooling my scene reconstruction module in two different ways...
Somewhat related: I think we do have a 3D map of the environment even for things that we aren't looking at at the moment. For example I feel as if I had a device in my brain that keeps track of which people are in which parts of the house right now (or where some emotionally-loaded objects are). I don't have to exert conscious effort specifically for this.
Another thing: it's interesting to think about why we can see dots and lines and shapes at all. By this I mean, why do these low-level things reach our conscious awareness? You aren't consciously aware of your blood sugar level or hormone levels. You do feel a sort of aggregated well-being feeling consciously but the details don't reach the conscious level. It's a strange and bizarre thing to think about what vision could be like if our consciousness didn't have access to dots and shapes and colors style low-level image data and we only "felt" the gist of it, for example by only feeling our current 3D model in some way. (It could be similar to blindsight.)
One answer could be that our vision is so complicated that the unconscious parts just can't cope with it fully, they can't analyze it sufficiently, and conscious processes (evolutionarily recent brain parts) need access to the basic "pixel-data" like things as well.
But again, maybe when we intentionally try to look at specific dots (as if looking at pixels, interpreting the visual field as a screen), we maybe aren't really looking at the low-level input but rather a reconstruction. Maybe we are seeing lines, corners and other geometric primitives laid on top of one another, like an SVG image, not like a BMP image. Maybe we don't really have conscious access to the low-level visual signals, we just have access to a reconstruction.
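The BMP-versus-SVG distinction can be made concrete with toy data structures (nothing neuroscientific here, just the representational point): the same vertical line stored as raw pixels versus as one geometric primitive that can be re-rendered on demand.

```python
# "BMP-like": low-level pixel data, the line stored dot by dot.
bmp_like = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]

# "SVG-like": a single primitive, "a vertical line at x=1"; the dots
# are only implied by the description.
svg_like = [("line", {"from": (1, 0), "to": (1, 2)})]

# Rendering the primitive back to pixels shows both encode the same scene.
rendered = [[0, 0, 0] for _ in range(3)]
for shape, attrs in svg_like:
    if shape == "line" and attrs["from"][0] == attrs["to"][0]:
        x = attrs["from"][0]
        for y in range(attrs["from"][1], attrs["to"][1] + 1):
            rendered[y][x] = 1

print(rendered == bmp_like)  # True
```

The point of the analogy: if conscious vision is "SVG-like", then what feels like inspecting raw pixels may really be inspecting a reconstruction rendered from primitives.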
I don't think neuroscience has settled these questions yet, but it should be possible to read the answers off the connections between brain areas.
I remember not really "getting" these illusions when I was a kid. I just didn't find them interesting, it looked too straightforward.
The idea of a "2D screen inside our head" is not our natural intuition. Before learning about these things, I just felt that I simply perceive the environment around me. I don't see a flat pixel grid in front of me when I walk around; rather, I have a model of the environment that I continuously update, and I perceive the objects "from where they are", just like I feel leg pain as if it were "in my leg", despite the fact that the pain actually happens in the brain. I see objects where they are in the 3D model, not where they are on a virtual screen.
The screen-and-pixels analogy may be so prevalent in modern times because of TV, photos, or even earlier realistic paintings. But early art was not really realistic, which I think shows either that early artists simply lacked the technique for realism, or that the "flat image" way of seeing was never their natural intuition in the first place.
The second explanation seems more plausible to me.
These illusions are only illusions if you take the "2D screen and pixels" view of vision. Now, that view is also important for technological applications, and it's biologically relevant too (retina cells are sort-of pixels); I'm just saying it's not really an illusion against built-in intuition.
One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!
Well worth stressing.
It's possible to go meta on nearly any issue, and there are a lot of meta-level arguments - group affiliation, signaling, rationalization, ulterior motives, whether a position is contrarian or supported by the majority, who the experts are and how much we should trust them, which group is persecuted the most, straw man positions and whether anybody really holds them, slippery slopes, different ways to interpret statements, who is working under which cognitive bias ...
Which is why I prefer discussions to stick to the object level rather than go meta. It's just too easy to rationalize a position in meta, and to find convincing-sounding arguments as to why the other side mistakenly disagrees with you. And meta-level disagreements are more likely to persist in the long run, because they are hard to verify.
Sure, meta-level arguments are very valuable in many cases; we shouldn't drop them altogether. But we should be very cautious while using them.
That's a triad too: naive instinctive signaling / signaling-aware people disliking signaling / signaling is actually a useful and necessary thing.
It does, and thank you for the reply.
How should we define "pleasure"? -- A difficult question. As you mention, it is a cloud of concepts, not a single one. It's even more difficult because there appears to be precious little driving the standardization of the word-- e.g., if I use the word 'chair' differently than others, it's obvious, people will correct me, and our usages will converge. If I use the word 'pleasure' differently than others, that won't be as obvious because it's a subjective experience, and there'll be much less convergence toward a common usage.
But I'd say that in practice, these problems tend to work themselves out, at least enough for my purposes. E.g., if I say "think of pure, unadulterated agony" to a room of 10000 people, I think the vast majority would arrive at fairly similar thoughts. Likewise, if I asked 10000 people to think of "pure, unadulterated bliss… the happiest moment in your life", I think most would arrive at thoughts which share certain attributes, and none (<.01%) would invert answers to these two questions.
I find this "we know it when we see it" definitional approach completely philosophically unsatisfying, but it seems to work well enough for my purposes, which is to find mathematical commonalities across brain-states people identify as 'pleasurable', and different mathematical commonalities across brain-states people identify as 'painful'.
I see what you mean by "the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn't the way human minds work." On the other hand, all words are imperfect and we need to talk about this somehow. How about this: (1) what are the characteristic mathematics of (i.e., found disproportionally in) self-identified pleasurable brain states?
"what are the characteristic mathematics of (i.e., found disproportionally in) self-identified pleasurable brain states?"
Certain areas of the brain get more active and certain hormones get into the bloodstream. How does this help you out?