Comment author: SaidAchmiz 14 July 2013 07:19:29AM *  1 point [-]

You are clearly an extrovert, and that's fine, but please refrain from speaking as if introverts are inherently inferior and incorrect. It's incredibly annoying and insulting.

Also, you say

People are already encouraged to be way too self-involved, isolated, and "individualistic".

And then you say

Doing things together is good, especially if they challenge you both (whether that's by temporary discomfort, new concepts, or whatever). If they don't want to be involved let them take responsibility for communicating that, because it is their responsibility.

Do you not see the irony of forcing yourself on other people, despite their wishes, and justifying this by saying that they're too self-involved?

Like RolfAndreassen said: please back the fuck off and leave others alone.

Comment author: Caspian 15 July 2013 03:05:28PM 4 points [-]

Like RolfAndreassen said: please back the fuck off and leave others alone.

Please stop discouraging people from introducing themselves to me in circumstances where it would be welcome.

Comment author: Kaj_Sotala 13 July 2013 02:58:32PM *  29 points [-]

Is willpower, in the short-term at least, a limited and depletable resource?

I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:

[After criticizing studies trying to show that willpower is a resource that depends on glucose]

What about the more general notion that “willpower” is a “resource” that gets consumed or expended when one exerts self-control? First and foremost, let’s keep in mind that the idea is inconsistent with the most basic facts about how the mind works. The mind is an information-processing device. It’s not a hydraulic machine that runs out of water pressure or something like that. Of course it is a physical object, and of course it needs energy to operate. But mechanics is the wrong way to understand, or explain, its action, because changes in complex behavior are due to changes in information processing. The “willpower as resource” view abandons these intellectual gains of the cognitive revolution, and has no place in modern psychology. That leaves the question, of course, about what is going on in these studies.

Let’s back up for a moment and think about what the function of self-control might be. Taking the SATs, keeping your attention focused, and not eating cookies all feel more or less unpleasant, but it’s not like spraining your ankle or running a marathon, where the unpleasant sensations are easy to understand from a functional point of view. The feelings of discomfort are probably the output of modules designed to compute costs. When your ankle is sprained, putting weight on it is costly because you can damage it further. When you have been running for a long time, the chance of a major injury goes up. These sensations, then, are probably evolution’s way of getting you to keep your weight off the joint and stop doing all that running, respectively.

There’s nothing obviously analogous for not eating cookies or doing word problems. Why does it feel like something, anything at all, to (not) do these things? As we’ve seen, lots of other stuff happens in your head, all the time, and it doesn’t feel like anything. Further, given that it seems as if exerting self-control is a good thing, that is, that it generally leads to outcomes that might be expected to yield fitness benefits, you might expect that exerting self-control would feel good and easy. Why does it seem hard, and feel even harder over time? What is the sensation of “effort” designed to get you to do?

One reason it seems hard might derive from the fact that “exerting self-control” entails incurring immediate costs in various forms, and “effort” is the representation of these costs. Consider not eating a cookie. There are probably modules in your mind that are designed to compute the benefits of eating nice calorie packages. They’re wired up to the senses, designed to calculate just how good (in the evolutionary sense) eating the calorie package is. From the point of view of these modules, not eating the cookie is a cost, in particular, the lost calories in the cookie. So, the sensation of the effort of not eating it—“temptation”—is probably evolution’s way of getting you to eat the cookie, just as the sensation of pain is evolution’s way of getting you to stay off your sprained ankle. In both cases, the experience is the output of a module designed to compute costs.

The same argument applies to other opportunities, and they take various forms. In some experiments, subjects are told to ignore words flashing on a computer screen, something that feels quite effortful. Why? Well, not reading words on a screen carries a loss of information: What did those words say? A similar argument applies regarding Ariely’s work on decision making during sexual arousal, which we looked at earlier in this chapter. The reason that subjects respond to those survey questions when they are aroused is probably because the mechanisms designed to take advantage of mating opportunities are computing benefits in the environment, though they are being fooled by the fact that the images they are getting are pictures rather than actual people.

Is it also a cost to solve word problems? Sure, but the cost isn’t caloric. Solving word problems requires the use of certain fancy modules, and when one is doing one of these tasks, these modules are kept busy. This means that doing these tasks carries real (opportunity) costs: all the things that these modules could be doing but are not because they are engaged. It’s not unlike what happens when you start up some big piece of software on your computer: Other things suffer, necessarily. Starting up software carries these costs. Working on word problems, similarly, keeps important modular systems from doing other tasks.

So, instead of a resource view, my view is that the issue is more of an effort monitor—an “effortometer” in the mind. My guess is that the reason it feels like something to pay close attention to something, solve hard problems, or avoid eating cookies is that doing these things is costly from the perspective of certain modules. The feeling of “mental effort,” on this view, is like a counter, adding up all these opportunity costs to determine if it’s worth continuing to do whatever one is doing. As these costs get higher—either because one is doing the task for a while, or for some other reason—the effortometer counts higher, giving rise to the sensation of effort, and also giving the impatient modules more and more of an edge.

If I’m working on word problems—but not getting anywhere—using my modules in this way isn’t doing much good, so maybe I should stop. Interestingly, as illustrated by the results of the studies described above, the effect seems to extend from one task to another, even if the tasks are quite different.

This idea suggests that a mechanism is needed that performs these computations, weighing the costs and benefits of doing tasks that make use of certain modules. Some modules are counting up these costs, and when the effortometer increases, there is less suppression of the short-term modules—it’s time to move on. So, it’s not “willpower” that’s exhausted—it’s that the ratio of costs to reward is too high to justify continuing. As Baumeister himself indicated, “it is adaptive to give up early on unsolvable problems. Persistence is, after all, only adaptive and productive when it leads to eventual success.”

The effortometer view suggests a way to “reset” or at least reduce the count. Suppose we give subjects a reward, such as a small gift, or even light praise; this ought to “reset” the counter, just as when a foraging animal’s time is rewarded by finding food morsels. Diane Tice and colleagues conducted some work in which some subjects were told not to think of a white bear, and others were not. The idea was that not thinking of a white bear takes some “willpower,” and when you’ve just used your willpower, you have less of it left to use in the next task, which was drinking an unpleasant beverage. They found that if you have to suppress thinking of a white bear, you can’t drink as much of the awful Kool-Aid. So, that looks good for a “resource” model. Your willpower sponge has been squeezed out.

Some subjects were, however, given a small gift after suppressing thinking of a white bear. These subjects were able to drink just as much of the nasty stuff as those who were at liberty to think of as many white bears as they wanted. That is, their “willpower” seems to have been restored, making them able to endure the foul-tasting beverage.

These findings are very hard to accommodate with a “resource” model. If my self-control sponge is squeezed dry by not thinking of a white bear, a gift shouldn’t help me exert willpower—I’m all out of it. (And certainly the gift didn’t increase the amount of glucose in my body.) In contrast, this finding fits very well with the effortometer model. If the effortometer is monitoring reward, then a gift resets it, and ought to improve subsequent self-control tasks.

Elsewhere in the book (I forget where) he also notes that the easiest explanation for why people run low on willpower when hungry is simply that a situation where your body urgently needs food is a situation where your brain considers everything not directly related to acquiring food to have a very high opportunity cost. That seems more elegant and realistic than the common folk-psychological explanation, which suggests that willpower is a resource you lose when you're hungry or tired. It's more a question of the evolutionary tradeoffs being different when you're hungry or tired, which leads to different cognitive costs.
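To make the contrast concrete, here's a minimal toy sketch in Python (my own illustration, not Kurzban's; every quantity and threshold is made up). The resource model treats willpower as a stock that tasks deplete and that a gift can't refill; the effortometer model treats effort as a running tally of opportunity costs that a reward resets.

```python
# Toy contrast between "willpower as resource" and Kurzban's effortometer.
# Purely illustrative; the numbers and threshold are arbitrary.

def resource_model(events, budget=10.0):
    """Willpower is a stock that tasks deplete; rewards don't refill it."""
    for kind, size in events:
        if kind == "task":
            budget -= size
    return budget > 0  # can we still exert self-control afterwards?

def effortometer_model(events, give_up_at=10.0):
    """Effort is a tally of opportunity costs; a reward resets the counter."""
    tally = 0.0
    for kind, size in events:
        if kind == "task":
            tally += size   # cost of keeping useful modules tied up
        elif kind == "reward":
            tally = 0.0     # a small gift or light praise resets the count
    return tally < give_up_at  # below threshold: keep persisting

# Schematic white-bear experiment: suppression task, then the nasty drink.
no_gift   = [("task", 6.0), ("task", 6.0)]
with_gift = [("task", 6.0), ("reward", 0.0), ("task", 6.0)]

print(resource_model(no_gift), resource_model(with_gift))          # False False
print(effortometer_model(no_gift), effortometer_model(with_gift))  # False True
```

Only the effortometer version reproduces the gift experiment above: a reward, which adds no glucose and refills no "resource", nonetheless restores performance on the second task.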

Comment author: Caspian 15 July 2013 02:48:55PM 1 point [-]

I now plan to split up long boring tasks into short tasks with a little celebration of completion as the reward after each one. I actually decided to try this after reading Don't Shoot the Dog, which I think I saw recommended on Less Wrong. It got me a somewhat more productive weekend. If it stops helping, I suspect it will be because the reward has stopped being fun.

Comment author: Eugine_Nier 14 July 2013 08:44:39PM -1 points [-]

Pick a dire concern from the developed world today; now, how would you explain to an average Westerner of ~200 years ago why that concern is dire?

Comment author: Caspian 15 July 2013 02:22:51PM 0 points [-]

Getting back to post-scarcity for people who choose not to work, and what resources they would miss out on, a big concern would be not having a home. Clearly this is much more of a concern than drinks on flights. The main reason it is not considered a dire concern is that people's ability to choose not to work is not considered that vital.

Comment author: moridinamael 08 July 2013 03:28:36PM 8 points [-]

This was my thought as well, but Harry would have had to be unreasonably sure that Dumbledore didn't have some kind of "de-Transfigure everything in sight" spell to use on him.

Comment author: Caspian 09 July 2013 12:26:20PM 2 points [-]

A second, hidden copy of himself could possibly use the time-turner as soon as it was announced that the ring was to be transfigured, and make sure Hermione was not in the ring, but I think Harry has better uses than that for as much time-turning as he can get.

Comment author: lfghjkl 08 July 2013 06:48:35PM 7 points [-]

"I very much need to visit the washroom, and I would also like to change out of these pyjamas."

This is where he's going to be using the time-turner to pick up Hermione's transfigured body before Flitwick arrives.

The reason this works this time is that he has already precommitted to doing so when he spent all those hours thinking until dinner the day before. The ring is a red herring.

Comment author: Caspian 09 July 2013 12:11:09PM 1 point [-]

My first thought was that she'd been transfigured into the pajamas, but I don't think that's likely. My theory is that when Harry slept in his bed it was the second time he'd been through that time period. The first time, he stayed invisible with transfigured Hermione in his possession, waited until woken-up Harry had finished being searched, gave her to woken-up Harry, then went back in time and went to bed.

Comment author: leplen 26 June 2013 01:05:34PM *  1 point [-]

I'd like to address just the claim here that you could provide instructions to a nanosystem with a speaker. If we assume that the frequency range of the speaker lines up with human hearing, and that our nanosystem is in water, then the smallest possible wavelength we can get from our speaker is on the order of 7 cm:

λ = v / f = (1500 m/s) / (20 kHz) = 7.5 cm

How can you provide instructions to a nanosystem with a signal whose linear dimension is on the order of centimeters? How can you precisely control something when your manipulator is orders of magnitude larger than the thing you're manipulating?
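As a quick check of the arithmetic (assuming the usual ~1500 m/s speed of sound in water and a 20 kHz upper limit of hearing):

```python
# Wavelength of a 20 kHz tone in water: lambda = v / f.
v = 1500.0    # approximate speed of sound in water, m/s
f = 20_000.0  # 20 kHz, roughly the upper edge of human hearing
print(f"{100 * v / f:.1f} cm")  # -> 7.5 cm
```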

Comment author: Caspian 26 June 2013 02:58:31PM 3 points [-]

You can get microphones much smaller than 7 cm, and they can detect frequencies way lower than 20 kHz. There's no rule saying you need a large detector to pick up a signal with a large wavelength.

Comment author: mwengler 20 June 2013 01:20:28AM 0 points [-]

Women famously say "sometimes I just want to be listened to. Don't try to solve my problems, just show me that you care." When men do this, women say "yes, that's what I'm talking about" and attempt to reinforce that behavior, perhaps unconsciously.

The people that own the bodies that I find attractive are women. If you pay attention women will tell you what they need in order to want to have sex with you.

Evolutionary psychology does not generally leave us conscious of why we react socially the way we react. Who can deny the widespread nature of men acting in a set of ways to attract a woman sexually? Who can deny the widespread nature of women acting certain ways to attract men? Does the fact that we do this because we want the other person to be attracted to us, and not "sincerely", really mean we all hold each other in contempt?

Does the fact that I hold an evolutionary psychological interpretation of what is going on and express that understanding in unromantic terms make me any more or less likely to hold the object of my affection in contempt?

Comment author: Caspian 23 June 2013 03:29:50AM 0 points [-]

Women famously say "sometimes I just want to be listened to. Don't try to solve my problems, just show me that you care."

I would interpret that as being specific to problems. There may also be women who would like feigned interest in dopey things they're into, or they may prefer to just discuss them with their girlfriends who are actually interested.

When men do this, women say "yes, that's what I'm talking about" and attempt to reinforce that behavior, perhaps unconsciously.

Explicitly saying this can be taken at face value, I think, but unconsciously reinforcing the behaviour may be meant to reinforce actual interested listening. You can't deduce which is the true preference.

Comment author: DavidAgain 19 June 2013 09:41:42PM 0 points [-]

Bit of a random question. Are you saying that the system the person above me used is 'I am providing her incentives to benefit me in the form of believing I care about her life - and ultimately it leads to most benefit for me (sex) but also benefit for her (faked sympathy? sex? not clear from your account)'?

If you mean where do I draw the line in manipulation, this doesn't look like 'providing incentives', and given it involves open deception it looks more like trickery. Though frankly if I thought someone was trying to 'provide incentives' for a friend of mine to sleep with them, I'd advise my friend to run a mile. There's no absolute line here, but a good rule of thumb is provided by Terry Pratchett: don't treat people as things.

Comment author: Caspian 23 June 2013 03:04:59AM *  1 point [-]

When I buy stuff from people I don't know I'm mostly treating them as a means to an end. Not completely, because there are ways I'd try to be fair to a human that wouldn't apply to a thing, but to a larger extent than I would want in personal / social relationships.

Another rule of thumb I kind of like is: don't get people into interactions with you that they wouldn't want if they knew what you were doing. I feel like that probably encourages erring too far on the side of caution and altruism. But if you know the other person would prefer you to empathise when not interested rather than be silent, leave or criticise, it's allowed.

ETA: I'm interested in better guidelines, especially from people who get the distaste for manipulation.

Comment author: Eliezer_Yudkowsky 14 June 2013 09:46:33PM 13 points [-]

I should post separately about this at some point.

Suppose we have a Collective Judgment of Science system in which scientific karma enters the system at highly agreed-upon points, e.g. very well-replicated, significant findings. Is there a system with the following properties:

  • The karma entry points need not necessarily be the most trusted people. Let's say you made a significant discovery, but 70% of the field disagrees with most of your opinions, and someone who hasn't made a significant discovery is trusted by 95% of the people who make significant discoveries. We should perhaps believe the latter person over you; making one discovery is not proof of perfect epistemic reliability.

  • If someone goes rogue and endorses a thousand trolls, who in turn endorse a million trolls, the million trolls can do no more karmic damage / produce no more karmic distortion, than the original person.

  • If I make three significant discoveries or write three good papers, there is no incentive to spread those papers out over 3 pseudonyms, or coauthor them with 3 others, in terms of how much influence I will have afterward. There may potentially be some incentive to centralize, although this would also not be good.

  • Downvoting or strongly downvoting an idea that many reliable epistemic voters think is correct may potentially be taken as Bayesian evidence by the system that you sometimes downvote good ideas. It's probably worth distinguishing this from concluding that you sometimes upvote bad ideas, without separate evidence.

  • Rather than give people an incentive to waste labor by systematically downvoting everything that person X said, there is a centralized "I think this person is a complete idiot" button. After pressing this button, further systematic downvoting has no effect. Obviously the order of operations should not be significant here, i.e., this button must have as much effect as downvoting everything. Perhaps you might be asked to look at the person's 3 highest-karma nodes and asked if you really want to downvote those too (vs. an "I hate most but not all things you say" rating) given that indicating "I uniformly hate everything you say" may then potentially reflect poorly on your reliability.

  • Within these constraints, it should be generally true that one person who's gotten a large karma prize cannot outvote 100 people who were all endorsed by trusted epistemics with karma originating from sources outweighing that single prize.

  • We're okay with this system using terabytes or even petabytes of memory to scale, so long as it's not exabytes and it can compute updates in real time, or at least less than an hour.

  • Being able to run on upvotes and downvotes is great; failing that, having people click on a 5-star level or a linear spectrum is about as much info as we should ask, since most users will not provide more info than this on most occasions. We could potentially have a standard 5-star scale which by leaving the mouse present for 5 seconds can go to 6 stars, or a 7-star rating which can be given once per month, or something. We can't ask users to rate along 3 separate dimensions.

  • We should take into account that some people have pickier standards and downvote more easily or upvote more rarely than others; conversely, someone who endorses almost everything is only providing discriminatory Bayesian evidence about a threshold on the low end of the quality scale. (A sketch of one way to correct for this follows this list.)

  • We can suppose that nodes are clustered in a 3-level hierarchy by broadest area, subject, and subspecialization but probably shouldn't suppose any more clustering in the data than this. It's possible we shouldn't try to assess it at all.

  • A consequence of this system is that as a philosopher, you can potentially achieve great endorsement of your perspicacity, but only by convincing people who were upvoted by people who delivered well-replicated significant experimental results. This strikes me as a feature, not a bug. I don't know of any particularly better way to decide which philosophers are reliable.

  • It can potentially be possible to bet karma on predictions subject to definite settlement a la a prediction market, since this can only operate to increase reliability of the system. If an open question that people opinionated about is definitely settled, anyone who was bold in predicting a minority correct answer should have their karma in some way benefit. Again we do not want an incentive to create pseudonyms to get independent karma awards here. (We can perhaps imagine such a question-node as a single source which endorses everyone who endorsed its correct answer.)

  • Presentation ordering of new nodes takes into account a value-of-information calculation, not just the highest confidence in current karma. (Obviously, under such a calculation, more prolifically voting users will see more recent nodes. This is also fine.)
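As a minimal sketch of just the rater-strictness point above (my own toy illustration, not a proposed design; the data and names are made up): one simple correction is to z-score each rater's votes, so that a grudging 4 from a strict voter and a routine 4 from a lenient one carry different information.

```python
# Toy correction for raters with different strictness: express each vote
# as standard deviations above that rater's own mean ("z-scoring").
# Illustrative only; ratings data is invented.
from collections import defaultdict
from statistics import mean, pstdev

ratings = [  # (rater, item, stars)
    ("strict",  "A", 2), ("strict",  "B", 4), ("strict",  "C", 2),
    ("lenient", "A", 5), ("lenient", "B", 5), ("lenient", "C", 4),
]

by_rater = defaultdict(list)
for rater, _, stars in ratings:
    by_rater[rater].append(stars)

def normalized(rater, stars):
    """A vote in units of standard deviations above the rater's own mean."""
    votes = by_rater[rater]
    spread = pstdev(votes) or 1.0  # guard against a zero-variance rater
    return (stars - mean(votes)) / spread

for rater, item, stars in ratings:
    print(f"{rater:>7} on {item}: raw {stars}, adjusted {normalized(rater, stars):+.2f}")
# The strict rater's 4 on B comes out well above their baseline (+1.41),
# while the lenient rater's 4 on C comes out below theirs (-1.41),
# even though the raw scores are equal.
```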

Comment author: Caspian 21 June 2013 08:52:29AM 1 point [-]

Not that I know of, but Advogato's trust metric limits the damage a rogue endorser of many trolls can do, using a maximum-network-flow calculation. It doesn't allow for downvotes.

If you allow downvoting and blocking all of someone's nodes, that could be an incentive for the person to partition their publications into three pseudonyms, so that once the first is blocked, the others are still available.
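For concreteness, here is a toy version of the flow idea (my own sketch, heavily simplified: the real Advogato metric solves a max-flow problem with distance-dependent node capacities). The property that matters survives in the sketch: a node can pass on no more trust than flowed into it, so endorsing a thousand trolls mints no extra trust.

```python
# Toy Advogato-style trust propagation. Each node keeps one unit of
# incoming flow for itself and splits the surplus evenly among its
# endorsees; the endorsement graph below is invented.
from collections import deque

endorsements = {
    "seed":  ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["rogue"],
    "rogue": [f"troll{i}" for i in range(1000)],  # rogue endorses 1000 trolls
}

def certified(graph, seed, seed_flow=8.0):
    flow_in = {seed: seed_flow}
    members, queue = set(), deque([seed])
    while queue:
        node = queue.popleft()
        if flow_in[node] >= 1.0:
            members.add(node)            # enough incoming flow: certified
        surplus = flow_in[node] - 1.0    # keep one unit, pass on the rest
        targets = graph.get(node, [])
        if surplus <= 0 or not targets:
            continue
        share = surplus / len(targets)   # can't hand out more than came in
        for t in targets:
            if t not in flow_in:
                flow_in[t] = share
                queue.append(t)
    return members

members = certified(endorsements, "seed")
print("rogue certified:", "rogue" in members)                            # True
print("trolls certified:", sum(n.startswith("troll") for n in members))  # 0
```

However many trolls the rogue endorses, their combined trust is bounded by the rogue's own incoming flow, which is the containment property in Eliezer's second bullet.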

Comment author: Qiaochu_Yuan 19 June 2013 04:31:57AM *  18 points [-]

I've been reading a little of the philosophical literature on decision theory lately, and at least some two-boxers have an intuition I hadn't thought about before: that Newcomb's problem is "unfair." That is, for a wide range of pairs of decision theories X and Y, you could imagine a problem which essentially takes the form "Omega punishes agents who use decision theory X and rewards agents who use decision theory Y," and this is not a "fair" test of the relative merits of the two decision theories.

The idea that rationalists should win, in this context, has a specific name: it's called the Why Ain'cha Rich defense, and I think what I've said above is the intuition powering counterarguments to it.

I'm a little more sympathetic to this objection than I was before delving into the literature. A complete counterargument to it should at least attempt to define what fair means and argue that Newcomb is in fact a fair problem. (This seems related to the issue of defining what a fair opponent is in modal combat.)
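For what it's worth, the Why Ain'cha Rich point is easy to make concrete with a toy expected-payoff calculation (my own illustration, using the standard $1,000 / $1,000,000 payoffs): against a predictor of accuracy p, one-boxing wins in expectation as soon as p exceeds about 0.5005.

```python
# Expected payoff in Newcomb's problem against a predictor of accuracy p.
# Box A always holds $1,000; box B holds $1,000,000 iff the predictor
# predicted one-boxing. Toy illustration of "Why Ain'cha Rich".

def expected(one_box: bool, p: float) -> float:
    if one_box:
        return p * 1_000_000            # box B is full with probability p
    return 1_000 + (1 - p) * 1_000_000  # two-boxers need the predictor to err

for p in (0.5, 0.6, 0.9, 0.99):
    print(f"p={p}: one-box {expected(True, p):>9,.0f}, "
          f"two-box {expected(False, p):>9,.0f}")
# One-boxing pulls ahead once p > 1_001_000 / 2_000_000 = 0.5005.
```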

Comment author: Caspian 21 June 2013 04:30:15AM 1 point [-]

That's a good question. Here's a definition of "fair" aimed at UDT-type thought experiments:

The agent has to know, as background knowledge, which thought experiment they are in; the universe can then only predict their counterfactual actions in situations that occur within that thought experiment, where the agent still knows they are in it.

This disallows my anti-oneboxer setup here: http://lesswrong.com/lw/hqs/why_do_theists_undergrads_and_less_wrongers_favor/97ak (because the predictor is predicting what decision would be made if the agent knew they were in Newcomb's problem, not what decision would be made if the agent knew they were in the anti-oneboxer experiment) but still allows Newcomb's problem, including the transparent box variation, and Parfit's Hitchhiker.

I don't think much argument is required to show Newcomb's problem is fair by this definition; the argument would be about deciding to use this definition of fair, rather than one that favours CDT, or one that favours EDT.
